Met at the Comma hackathon.

Start with Camera Calibration using Zhang’s method
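A minimal numpy sketch of what Zhang’s method does under the hood, on synthetic checkerboard-style views. Everything here (intrinsics, poses, the plain unnormalized DLT homography) is made up for the demo; real calibration adds point normalization, distortion, and a refinement step.

```python
import numpy as np

def homography_dlt(src, dst):
    # Plain (unnormalized) DLT for a plane-to-image homography; fine for clean synthetic data
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def zhang_intrinsics(Hs):
    # Each homography H = [h1 h2 h3] gives two linear constraints on B = K^-T K^-1:
    #   h1^T B h2 = 0  and  h1^T B h1 = h2^T B h2
    def v(H, i, j):
        hi, hj = H[:, i], H[:, j]
        return np.array([hi[0] * hj[0],
                         hi[0] * hj[1] + hi[1] * hj[0],
                         hi[1] * hj[1],
                         hi[2] * hj[0] + hi[0] * hj[2],
                         hi[2] * hj[1] + hi[1] * hj[2],
                         hi[2] * hj[2]])
    V = np.array([row for H in Hs for row in (v(H, 0, 1), v(H, 0, 0) - v(H, 1, 1))])
    _, _, Vt = np.linalg.svd(V)
    b = Vt[-1]
    if b[0] < 0:          # b is defined up to sign; make B positive definite
        b = -b
    B11, B12, B22, B13, B23, B33 = b
    # Closed-form recovery of the intrinsics from B (Zhang's paper, Appendix B)
    v0 = (B12 * B13 - B11 * B23) / (B11 * B22 - B12**2)
    lam = B33 - (B13**2 + v0 * (B12 * B13 - B11 * B23)) / B11
    alpha = np.sqrt(lam / B11)
    beta = np.sqrt(lam * B11 / (B11 * B22 - B12**2))
    gamma = -B12 * alpha**2 * beta / lam
    u0 = gamma * v0 / beta - B13 * alpha**2 / lam
    return np.array([[alpha, gamma, u0], [0, beta, v0], [0, 0, 1]])

# Synthetic "checkerboard": a planar grid seen from three rotated views (made-up K and poses)
K_true = np.array([[800.0, 0, 320], [0, 780, 240], [0, 0, 1]])
grid = np.array([[x, y] for x in np.linspace(-1, 1, 4) for y in np.linspace(-1, 1, 4)])

def rot(ax, ay):
    cx, sx, cy, sy = np.cos(ax), np.sin(ax), np.cos(ay), np.sin(ay)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Rx @ Ry

Hs = []
for ax, ay in [(0.3, 0.1), (-0.2, 0.4), (0.15, -0.35)]:
    R, t = rot(ax, ay), np.array([0.1, -0.1, 4.0])
    H = K_true @ np.column_stack([R[:, 0], R[:, 1], t])  # plane Z=0 projects through K [r1 r2 t]
    pix = (H @ np.column_stack([grid, np.ones(len(grid))]).T).T
    Hs.append(homography_dlt(grid, pix[:, :2] / pix[:, 2:]))

K_est = zhang_intrinsics(Hs)
```

Three general (non-fronto-parallel) views are the minimum for the full 5-parameter intrinsic matrix; with exact synthetic data the recovered K matches the true one almost to machine precision.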

Pre-req to ORB-SLAM: epipolar geometry (the fundamental matrix and the homography matrix)
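The fundamental matrix part can be made concrete with the classic normalized 8-point algorithm. A numpy sketch on synthetic correspondences; the intrinsics, pose, and points are all invented for the demo:

```python
import numpy as np

def eight_point(x1, x2):
    # Normalized 8-point algorithm: find F such that x2_h^T F x1_h = 0 for all matches
    def normalize(pts):
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))])
        return (T @ ph.T).T, T
    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row of A per match; the null vector of A is F flattened row-major
    A = np.column_stack([p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
                         p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
                         p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)           # enforce rank 2 (fundamental matrices are singular)
    F = U @ np.diag([S[0], S[1], 0]) @ Vt
    return T2.T @ F @ T1                  # undo the normalization

# Synthetic two-view setup (all numbers made up)
K = np.array([[700.0, 0, 320], [0, 700, 240], [0, 0, 1]])
a = 0.2
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.1])
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 6.0])

x1h = (K @ X.T).T                          # camera 1 at the origin
x2h = (K @ (R @ X.T + t[:, None])).T       # camera 2 rotated and translated
x1, x2 = x1h[:, :2] / x1h[:, 2:], x2h[:, :2] / x2h[:, 2:]

F = eight_point(x1, x2)
F /= np.linalg.norm(F)
res = [abs(np.r_[q, 1] @ F @ np.r_[p, 1]) for p, q in zip(x1, x2)]  # epipolar residuals
```

The Hartley normalization step (centering and scaling the points before the SVD) is what makes the 8-point algorithm numerically usable on pixel coordinates.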

Types of SLAM

  1. Sparse (feature-based)
  2. Sparse but intensity-based, called the semi-direct method (SVO)
    1. Source:
  3. Dense (ElasticFusion-like; COLMAP fits under this as well)
    • LSD-SLAM

These are also called indirect (feature-based) and direct (intensity-based) SLAM.

Tip from Sachin: focus on sparse, because dense methods usually build on top of sparse.

For robotics, 1-2 cm tolerance is fine, but VR needs SUPER SUPER tight tolerance

  1. ORB-SLAM: track features

Perspective-3-Point (P3P) algorithm (for marker tracking: infer the camera pose)
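Not a full P3P solver (the real thing reduces to a quartic for the three depths); this sketch just sets up and verifies the law-of-cosines constraints P3P is built on, using a made-up configuration given directly in the camera frame:

```python
import numpy as np

# Three marker corners, given here in the camera frame (camera at the origin)
# so the true depths are just the norms; a real P3P solver recovers them from
# the bearings and the known inter-point distances alone.
P = np.array([[0.0, 0.0, 4.0], [1.0, 0.0, 5.0], [0.0, 1.0, 6.0]])
s = np.linalg.norm(P, axis=1)   # depths s_i along each ray (the unknowns in P3P)
f = P / s[:, None]              # unit bearing vectors (what K^-1 gives you from the pixels)

# Law-of-cosines constraints that P3P solves for s1, s2, s3:
#   s_i^2 + s_j^2 - 2 s_i s_j (f_i . f_j) = |P_i - P_j|^2
res = [s[i]**2 + s[j]**2 - 2 * s[i] * s[j] * (f[i] @ f[j])
       - np.sum((P[i] - P[j])**2)
       for i, j in [(0, 1), (0, 2), (1, 2)]]
```

Once the depths are known, the camera pose follows from aligning the recovered camera-frame points with the marker's world-frame coordinates (an absolute orientation step).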

So it seems that there are 2 playlists that are super helpful:

Actually Mobile sensing 1 has some pretty good videos too:

  • Includes MPC and Kalman Filter
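The Kalman filter from that playlist is worth internalizing early. A minimal 1-D sketch estimating a constant from noisy readings (all numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 5.0
z = true_value + rng.normal(0.0, 0.5, size=200)   # noisy sensor readings

x, p = 0.0, 1e3     # state estimate and its variance (uninformative prior)
q, r = 1e-6, 0.25   # process and measurement noise variances
for zi in z:
    p = p + q                # predict: state assumed constant, uncertainty grows by q
    k = p / (p + r)          # Kalman gain: how much to trust the new measurement
    x = x + k * (zi - x)     # update with the innovation (measurement residual)
    p = (1 - k) * p
```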

Start with this: (understand backend optimization)

For Cyrill (follow the order):

  1. Camera Calibration
  2. (optional, only if interested)
  3. (this is different from Feature Matching; it’s just correlation between two images, not really needed)
  4. Feature Points (parts 1 and 2)
  5. Homogeneous coordinates (quickly glance; see if you understand)
  6. DLT (super important):
  7. Zhang’s method
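The DLT step (here in its camera-resection form: recover the projection matrix P from 3D-2D correspondences) fits in a few lines. The intrinsics, pose, and points below are invented for the demo:

```python
import numpy as np

def dlt_projection(X, x):
    # X: (N,3) world points, x: (N,2) pixels, N >= 6; solves x ~ P [X;1] up to scale
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # null vector of A, reshaped into P

# Synthetic check with an assumed camera (numbers made up for the demo)
K = np.array([[600.0, 0, 320], [0, 600, 240], [0, 0, 1]])
t = np.array([0.2, -0.1, 5.0])
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (10, 3))

P_true = K @ np.column_stack([np.eye(3), t])
xh = (P_true @ np.column_stack([X, np.ones(10)]).T).T
x = xh[:, :2] / xh[:, 2:]

P_est = dlt_projection(X, x)
xh2 = (P_est @ np.column_stack([X, np.ones(10)]).T).T
err = np.abs(xh2[:, :2] / xh2[:, 2:] - x).max()   # reprojection error in pixels
```

The same "build A, take the SVD null vector" pattern reappears in homography estimation, the 8-point algorithm, and Zhang's calibration, which is why the DLT lecture is flagged as super important.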