Simultaneous Localization and Mapping (SLAM)
SLAM stands for “Simultaneous Localization and Mapping”. It is a computational problem in robotics and computer vision: building a map of an unknown environment while at the same time locating the robot (or camera) within that map.
For camera-based SLAM, see Visual SLAM.
Resources
- https://theairlab.org/tartanslamseries/
- https://www.mathworks.com/discovery/slam.html
- https://navigation.ros.org/tutorials/docs/navigation2_with_slam.html?highlight=slam
- A really great 2-part paper: http://ais.informatik.uni-freiburg.de/teaching/ws11/robotics2/pdfs/ls-slam-tutorial.pdf
- Understanding SLAM Using Pose Graph Optimization | Autonomous Navigation, Part 3
I was first introduced to this idea by George Hotz through his livestream, where he livecoded a SLAM implementation.
Approaches to SLAM:
- Kalman Filter based (e.g. EKF-SLAM)
- Particle Filter based (e.g. FastSLAM, Gmapping)
- Graph-Based SLAM (the modern technique)
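Graph-based SLAM builds a graph whose nodes are robot poses and whose edges are relative-motion constraints (odometry between consecutive poses, loop closures between revisited places), then solves a least-squares problem over the node positions. Here is a minimal 1D sketch of that idea (a hypothetical example with made-up numbers, not from any particular library; real systems optimize SE(2)/SE(3) poses with solvers like g2o or Ceres):

```python
# Minimal 1D pose-graph optimization: poses on a line, odometry edges
# between consecutive poses, plus one loop-closure edge. We minimize the
# sum of squared residuals by gradient descent.

def optimize(poses, edges, anchor=0, lr=0.1, iters=2000):
    poses = list(poses)
    for _ in range(iters):
        grad = [0.0] * len(poses)
        for i, j, z in edges:              # constraint: poses[j] - poses[i] ≈ z
            r = (poses[j] - poses[i]) - z  # residual of this edge
            grad[j] += 2 * r
            grad[i] -= 2 * r
        grad[anchor] = 0.0                 # pin the first pose (gauge freedom)
        poses = [p - lr * g for p, g in zip(poses, grad)]
    return poses

# Drifted odometry says each step moves +1.1, but a loop closure
# (last edge) says pose 3 should be 3.0 away from pose 0.
odo = [0.0, 1.1, 2.2, 3.3]                 # dead-reckoning initial guess
edges = [(0, 1, 1.1), (1, 2, 1.1), (2, 3, 1.1),
         (0, 3, 3.0)]                      # loop-closure constraint
result = optimize(odo, edges)
# result[3] ≈ 3.075: the loop closure pulls the drifted 3.3 back toward 3.0,
# spreading the correction evenly over the intermediate poses.
```

The point of the loop closure is visible in the output: the accumulated odometry drift is not just clipped at the end but redistributed across the whole trajectory, which is why graph-based SLAM recovers well from bad intermediate estimates.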
Concepts
Implementations
2D SLAM
- slam_toolbox
- Gmapping from OpenSlam
- Google Cartographer
2D SLAM Study
TODO: insert the confluence studies?
When SLAM doesn't work...
When SLAM doesn’t work for me, it almost always turns out to be an odometry problem: the pose estimates feeding the mapper are so bad that scan matching can’t recover.
We went through a bunch of possible explanations:
- we were turning too sharply, causing wheel slip, so the turn estimates were wrong
- the interval between recorded points was too large (turns out this is actually a good thing)
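The wheel-slip explanation is easy to demonstrate: a small heading error on each turn compounds into a large position error over a run, even when every individual step looks fine. A tiny dead-reckoning sketch (hypothetical numbers, not measured from any real robot):

```python
import math

# Dead-reckoning with an optional per-turn heading bias, standing in for
# wheel slip during sharp turns. Each step: apply the commanded turn
# (plus bias), then move forward one step length.

def dead_reckon(turns, step_len=1.0, heading_bias=0.0):
    x = y = theta = 0.0
    for turn in turns:                  # commanded heading change (radians)
        theta += turn + heading_bias    # slip adds a small bias every turn
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
    return x, y

square = [math.pi / 2] * 20             # 5 laps around a unit square
ideal = dead_reckon(square)             # returns to the origin
slipped = dead_reckon(square, heading_bias=0.02)
err = math.hypot(slipped[0] - ideal[0], slipped[1] - ideal[1])
# a 0.02 rad bias per turn leaves the estimate several tenths of a
# meter off after only 20 steps
```

This is why sharp turns were the prime suspect: heading errors are multiplied by every meter of subsequent travel, whereas pure translation errors stay local.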