Simultaneous Localization and Mapping (SLAM)
SLAM stands for “Simultaneous Localization and Mapping”. It is a computational problem in robotics and computer vision: build a map of an unknown environment while simultaneously tracking the robot’s (or camera’s) pose within that environment.
- A really great 2-part paper
More details and resources in Visual SLAM.
Two main implementations
- EKF-SLAM (online)
	- uses scan matching
- Graph-based SLAM (offline)
	- estimates the full trajectory from the complete set of measurements
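As a toy illustration of the online (EKF) flavour, here is a minimal 1D EKF-SLAM sketch in NumPy: the state holds one robot coordinate and one landmark coordinate, and each range measurement updates both jointly. All noise values and the sensor model are made up for the example.

```python
import numpy as np

# Minimal 1D EKF-SLAM sketch (illustrative only, not a full pipeline).
# State: [robot position, landmark position]; the robot measures the
# range to the landmark (z = landmark - robot) after each motion step.

x = np.array([0.0, 0.0])          # estimate: robot at 0, landmark unknown
P = np.diag([0.0, 1e6])           # huge initial landmark uncertainty
Q = 0.1                           # motion noise variance (assumed)
R = 0.05                          # measurement noise variance (assumed)
true_robot, true_landmark = 0.0, 5.0
rng = np.random.default_rng(0)

for _ in range(20):
    u = 0.5                                        # commanded forward motion
    true_robot += u + rng.normal(0, Q**0.5)        # real motion is noisy
    # --- EKF predict: only the robot part of the state moves ---
    x[0] += u
    P[0, 0] += Q
    # --- EKF update with the range measurement ---
    z = true_landmark - true_robot + rng.normal(0, R**0.5)
    H = np.array([[-1.0, 1.0]])                    # Jacobian of z = x_l - x_r
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T / S                                # Kalman gain (2x1)
    x = x + (K * (z - (x[1] - x[0]))).ravel()
    P = (np.eye(2) - K @ H) @ P

print(x)   # [robot, landmark] estimates; their difference tracks the true range
```

Note that the measurement only constrains the landmark *relative* to the robot, so the absolute estimates drift together while their difference stays tight — which is exactly why odometry quality matters so much below.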
Main Steps from ChatGPT
When SLAM doesn't work...
When SLAM doesn’t work for me, I always find it’s an odometry problem, because you end up with really bad estimates of the positions…
We went through a bunch of possible explanations:
- we were turning too sharply, causing wheel slip, so the reported turn was wrong
- The interval between recorded points was too large (turns out this is actually a good thing)
- GMapping from OpenSLAM
- Google Cartographer
Loop closure is what makes SLAM work, so in real life, doing SLAM without a prior map is pretty hard.
- Don’t make a false loop closure. It’s better to miss a real one than to add a fake one, since loop-closure constraints carry a lot of weight
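A tiny example of why loop closures carry so much weight: a 1D pose graph with four drifting odometry edges and one loop-closure edge saying the robot returned to its start, solved as weighted least squares. All the numbers here are invented for illustration.

```python
import numpy as np

# 1D pose-graph toy: 4 odometry steps of ~1.0 m each, then a loop closure
# saying the robot is back at the start (x4 = x0 = 0). Rows are weighted by
# inverse noise std dev; the loop closure gets a much higher weight, which
# is also why a *false* closure would distort the whole trajectory.

odo = np.array([1.05, 0.97, 1.10, 1.02])   # drifting odometry measurements
sigma_odo, sigma_loop = 0.1, 0.01          # assumed noise levels

# Unknowns: x1..x4 (x0 fixed at 0). Rows: 4 odometry edges + 1 loop edge.
A = np.zeros((5, 4)); b = np.zeros(5); w = np.zeros(5)
for i in range(4):
    if i > 0:
        A[i, i - 1] = -1.0
    A[i, i] = 1.0
    b[i] = odo[i]                          # edge: x_{i+1} - x_i = odo[i]
    w[i] = 1.0 / sigma_odo
A[4, 3] = 1.0; b[4] = 0.0; w[4] = 1.0 / sigma_loop   # loop closure: x4 = 0

x = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)[0]
print("dead-reckoned x4:", odo.sum())   # pure drift, ~4.14
print("optimized x4:", x[3])            # pulled back near 0 by the closure
```

Swap the loop-closure row for a wrong one (e.g. `b[4] = 2.0`) and the whole trajectory gets dragged toward it — the “better to miss a real one than add a fake one” rule in miniature.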
There is error/uncertainty in the lidar and odometry measurements.
We generate an occupancy grid, where white cells are confidently free, black cells are occupied, and grey cells are unknown.
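The white/black/grey classes typically come from thresholding a per-cell occupancy probability, commonly maintained in log-odds form so repeated noisy lidar hits/misses can be fused by simple addition. A minimal single-cell sketch (the hit/miss log-odds values are an assumed sensor model, not from any particular package):

```python
import math

# Log-odds occupancy update for one grid cell. Probabilities near 1 render
# black (occupied), near 0 white (free), and 0.5 grey (unknown).

L_HIT = math.log(0.7 / 0.3)     # assumed: a lidar hit means p(occ) = 0.7
L_MISS = math.log(0.4 / 0.6)    # assumed: a ray passing through means p(occ) = 0.4

def update(logodds, hit):
    """Fuse one observation by adding its inverse-sensor-model log-odds."""
    return logodds + (L_HIT if hit else L_MISS)

def prob(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(logodds))

cell = 0.0                      # log-odds 0  ->  p = 0.5 (unknown / grey)
for hit in [True, True, False, True]:
    cell = update(cell, hit)
print(round(prob(cell), 3))     # three hits, one miss -> confidently occupied
```

The additive update is what makes the grid robust to the lidar/odometry noise mentioned above: no single noisy beam flips a cell, but consistent evidence accumulates quickly.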
2D SLAM Study
TODO: insert the confluence studies?