Simultaneous Localization and Mapping (SLAM)
SLAM (Simultaneous Localization and Mapping) is a computational problem in robotics and computer vision: build a map of an unknown environment while, at the same time, locating the robot or camera within that environment.
Resources
- https://theairlab.org/tartanslamseries/
- https://www.mathworks.com/discovery/slam.html
- https://navigation.ros.org/tutorials/docs/navigation2_with_slam.html?highlight=slam
- A really great 2-part paper
I was first introduced to this idea by George Hotz through his livestream where he livecoded SLAM.
More details and resources in Visual SLAM.
Two main implementations
- EKF-SLAM (online; see the EKF sketch below)
  - Uses scan matching
- Graph-Based SLAM (offline)
  - Full trajectories are estimated using a complete set of measurements
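
To make the EKF branch concrete, here is a minimal sketch of the predict/update cycle it is built on. Assumptions worth flagging: the state is pose-only (x, y, theta) and the correction uses a single landmark at a known position, whereas real EKF-SLAM also stacks every landmark's position into the state vector; the motion model and the noise matrices R and Q are illustrative.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def predict(mu, Sigma, v, w, dt, R):
    """Propagate the pose with a unicycle motion model and inflate covariance."""
    x, y, th = mu
    mu_bar = np.array([x + v * dt * np.cos(th),
                       y + v * dt * np.sin(th),
                       wrap(th + w * dt)])
    G = np.array([[1, 0, -v * dt * np.sin(th)],   # Jacobian of the motion
                  [0, 1,  v * dt * np.cos(th)],   # model w.r.t. the state
                  [0, 0,  1]])
    return mu_bar, G @ Sigma @ G.T + R

def update(mu, Sigma, z, landmark, Q):
    """Correct the pose with one range-bearing measurement z = (r, phi)."""
    dx, dy = landmark[0] - mu[0], landmark[1] - mu[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - mu[2])])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0],   # measurement
                  [ dy / q,          -dx / q,          -1]])  # Jacobian
    K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + Q)      # Kalman gain
    innovation = np.array([z[0] - z_hat[0], wrap(z[1] - z_hat[1])])
    mu_new = mu + K @ innovation
    mu_new[2] = wrap(mu_new[2])
    return mu_new, (np.eye(3) - K @ H) @ Sigma
```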
Main Steps from ChatGPT
- Data collection
- Data association -> scan matching (see the ICP sketch below)
- State Estimation and State Update
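
The scan matching in the data-association step is classically done with some flavour of ICP (Iterative Closest Point). A minimal point-to-point sketch, assuming 2D scans given as (N, 2) numpy arrays, brute-force nearest neighbours, and no outlier rejection:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares R, t aligning point set A onto B (Kabsch / SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:     # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=30):
    """Repeatedly match src points to their nearest dst points and refine."""
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every point in cur (brute force)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t   # compose transforms
    return R_total, t_total
```

Seeding the match with the odometry guess (by pre-transforming src before calling icp) is what ties the data-association step back to the state estimate.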
When SLAM doesn't work...
When SLAM doesn't work for me, it is almost always an odometry problem: bad odometry leaves you with really bad estimates of the positions…
We went through a bunch of possible explanations:
- We were turning too sharply, so the measured turning was wrong (wheel slip)
- The interval between recorded points was too large (turns out this is actually a good thing)
Implementation
2D SLAM
- slam_toolbox
- GMapping from OpenSLAM
- Google Cartographer
Loop Closure
Loop closure is what enables SLAM to work: recognizing that you have returned to a previously visited place lets you correct the drift that accumulates along the way. Without it, doing SLAM in real life (with no prior map) is pretty hard.
- Don't make a false loop closure. It's better to miss a real one than to add a fake one, since loop-closure constraints carry a lot of weight (see the toy pose graph below).
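
A toy 1D pose graph makes the weighting point concrete: three odometry edges each drift a little, and one heavily weighted loop-closure edge claims the last pose is back at the start. All the numbers here are made up for illustration.

```python
import numpy as np

poses = 4
# edges: (i, j, measured offset x_j - x_i, information weight)
edges = [(0, 1, 1.1, 1.0),    # odometry, drifting ~0.1 per step
         (1, 2, 1.1, 1.0),
         (2, 3, 1.1, 1.0),
         (0, 3, 0.0, 100.0)]  # loop closure: "we are back where we started"

# Weighted linear least squares: minimize sum_k w_k * ((x_j - x_i) - z_k)^2,
# with x_0 pinned to 0 to remove the gauge freedom.
A, b = [], []
for i, j, z, w in edges:
    row = np.zeros(poses)
    row[i], row[j] = -1.0, 1.0
    A.append(np.sqrt(w) * row)
    b.append(np.sqrt(w) * z)
pin = np.zeros(poses); pin[0] = 1.0
A.append(1e6 * pin); b.append(0.0)

x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(x)   # pose 3 lands near 0: the closure outweighs the drifted odometry
```

Delete the closure edge and pose 3 stays at the drifted 3.3; with the heavy closure it snaps to roughly 0.01, which is exactly why a false loop closure is so damaging.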
There is error/uncertainty in the lidar and odometry measurements.
We generate an occupancy grid, where white cells are the ones we are fully confident are free, black cells are occupied, and grey cells are unknown.
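
Here is a minimal sketch of how such a grid gets filled in from one lidar scan. The cell values are a chosen convention, -1 = unknown (grey), 0 = free (white), 100 = occupied (black), which happens to match ROS's nav_msgs/OccupancyGrid; the grid size, resolution, and ray sampling are all illustrative.

```python
import numpy as np

RES = 0.05                                      # metres per cell
grid = np.full((200, 200), -1, dtype=np.int8)   # everything starts unknown
origin = np.array([100, 100])                   # world (0, 0) at grid centre

def to_cell(p):
    """World coordinates (metres) -> integer grid indices."""
    return (origin + p / RES).astype(int)

def integrate_scan(pose_xy, angles, ranges, max_range=4.0):
    """Mark cells along each beam free and the cell at each hit occupied."""
    for a, r in zip(angles, ranges):
        hit = r < max_range
        r = min(r, max_range)
        end = pose_xy + r * np.array([np.cos(a), np.sin(a)])
        # sample along the beam; a real implementation would use Bresenham
        for s in np.linspace(0.0, 1.0, int(r / RES) + 1):
            cx, cy = to_cell(pose_xy + s * (end - pose_xy))
            grid[cy, cx] = 0          # free space along the beam
        if hit:
            cx, cy = to_cell(end)
            grid[cy, cx] = 100        # obstacle at the beam's endpoint
```

For example, integrate_scan(np.zeros(2), np.linspace(0, 2 * np.pi, 360, endpoint=False), np.full(360, 2.0)) carves out roughly a 2 m disc of free cells ringed by occupied cells, with everything beyond still unknown.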
2D SLAM Study
TODO: insert the confluence studies?