Simultaneous Localization and Mapping (SLAM)

SLAM is a technique for Robots to simultaneously do Localization and Mapping.

SLAM stands for “Simultaneous Localization and Mapping”. It is a computational problem in robotics and computer vision that involves creating a map of an unknown environment while at the same time locating the robot or camera within that environment.
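Probabilistically, this is usually posed as estimating the joint posterior over the trajectory and the map (the standard formulation from the probabilistic-robotics literature):

$$
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
$$

where $x_{1:t}$ is the robot trajectory, $m$ is the map, $z_{1:t}$ are the sensor measurements, and $u_{1:t}$ are the controls/odometry.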

Resources

I was first introduced to this idea by George Hotz through his livestream where he livecoded SLAM.

More details and resources in Visual SLAM.

Two main implementations

  1. EKF-SLAM (online): a filtering approach that keeps only the current pose and map estimate, updating them as each measurement arrives
  2. Graph-Based SLAM (offline): a smoothing approach
    • Full trajectories are estimated using the complete set of measurements (see the pose-graph sketch after this list)
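To make the graph-based variant concrete, here is a minimal sketch of 1D pose-graph optimization (the function name `solve_pose_graph` and all the numbers are made up for illustration): odometry and loop-closure constraints are stacked into one weighted linear least-squares problem and solved for the whole trajectory at once.

```python
import numpy as np

def solve_pose_graph(edges, n_poses):
    """edges: (i, j, measured displacement x_j - x_i, information weight)."""
    A = np.zeros((len(edges) + 1, n_poses))
    b = np.zeros(len(edges) + 1)
    A[0, 0] = 1.0                        # anchor x_0 = 0 to fix the gauge freedom
    for row, (i, j, z, w) in enumerate(edges, start=1):
        sw = np.sqrt(w)                  # weight each row by sqrt(information)
        A[row, i], A[row, j] = -sw, sw
        b[row] = sw * z
    return np.linalg.lstsq(A, b, rcond=None)[0]

edges = [
    (0, 1, 1.0, 1.0),    # odometry: moved ~1.0 forward
    (1, 2, 1.1, 1.0),    # odometry (slightly noisy)
    (2, 3, 0.9, 1.0),    # odometry
    (0, 3, 3.0, 10.0),   # loop closure: high-confidence direct measurement
]
print(solve_pose_graph(edges, 4))  # full trajectory, pulled toward the closure
```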

Main Steps from ChatGPT

  1. Data collection
  2. Data association / scan matching
  3. State estimation and state update (see the ICP sketch after this list)
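As a concrete example of steps 2 and 3, here is a minimal 2D point-to-point ICP sketch (all names and numbers are illustrative, not from a specific library): nearest-neighbour matching does the data association, and an SVD-based rigid-transform fit does the state estimation and update.

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Estimate the rotation R and translation t aligning src to dst (both Nx2)."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = src @ R.T + t
        # Data association: nearest neighbour in dst for each transformed point
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # State estimation: best rigid transform via SVD (Kabsch algorithm)
        mu_m, mu_d = moved.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((moved - mu_m).T @ (matched - mu_d))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:        # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_d - dR @ mu_m
        R, t = dR @ R, dR @ t + dt       # state update: compose the increments
    return R, t

# Toy check: recover a known 10-degree rotation and a small shift
rng = np.random.default_rng(0)
scan = rng.uniform(-5, 5, (60, 2))
th = np.deg2rad(10)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
R_est, t_est = icp_2d(scan, scan @ R_true.T + np.array([0.3, -0.2]))
print(R_est, t_est)
```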

When SLAM doesn't work...

When SLAM doesn’t work for me, I always find that it is an odometry problem, because bad odometry leaves you with really bad estimates of the positions…

We went through a bunch of possible explanations:

  • we were turning too sharply, and the odometry during the turns was wrong, i.e. wheel slip
  • the interval between recorded points was too large (turns out this is actually a good thing)

Implementation

2D SLAM

Visual SLAM

Loop Closure

Loop closure is what enables SLAM to work. In real life, doing SLAM without a map is pretty hard, because drift accumulates along the trajectory until you recognize a previously visited place and correct for it.

  • Don’t make a false Loop Closure. It’s better to miss a real one than to add a fake one, since loop closures carry a lot of weight in the optimization (see the demo below)
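To see why, here is a small demo reusing `solve_pose_graph` from the pose-graph sketch above (all numbers are made up): a missed closure just leaves some drift, while one wrong, high-weight closure edge drags every pose with it.

```python
good = [(0, 1, 1.0, 1.0), (1, 2, 1.1, 1.0), (2, 3, 0.9, 1.0)]
fake = (0, 3, 5.0, 10.0)   # false closure: claims x_3 is 5.0 away from x_0

print(solve_pose_graph(good, 4))           # missed closure: mild drift only
print(solve_pose_graph(good + [fake], 4))  # false closure corrupts the whole path
```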

There is error/uncertainty in the lidar and odometry measurements.

We generate an Occupancy Grid, where white cells are free space you are fully confident about, black cells are occupied, and grey cells are unknown.
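Here is a minimal sketch of how such a grid can be maintained with log-odds updates for a single lidar ray (the increments `L_FREE`/`L_OCC` and the thresholds are assumed values for a generic inverse sensor model):

```python
import numpy as np

grid = np.zeros((100, 100))   # log-odds per cell; 0 means unknown
L_FREE, L_OCC = -0.4, 0.85    # assumed inverse-sensor-model increments

def update_ray(grid, x0, y0, x1, y1):
    """Rasterize a beam from the robot (x0, y0) to its hit point (x1, y1)."""
    n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    grid[ys[:-1], xs[:-1]] += L_FREE   # cells the beam passed through: freer
    grid[ys[-1], xs[-1]] += L_OCC      # cell where the beam ended: more occupied

update_ray(grid, 50, 50, 70, 65)

# Threshold the probabilities into the three display classes
p = 1.0 / (1.0 + np.exp(-grid))
cells = np.where(p > 0.6, "black", np.where(p < 0.4, "white", "grey"))
```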

2D SLAM Study

TODO: insert the confluence studies?