Robot Learning Reading Group

We're starting this robot learning reading group with New Systems, and we may host more after.

Similar events:

Questions we should answer as we read these papers:

  • What are the authors trying to do? Articulate their objectives.
  • How was it done prior to their work, and what were the limits of current practice?
  • What is new in their approach, and why do they think it will be successful?
  • What are the mid-term and final “exams” to check for success? (i.e., How is the method evaluated?)
  • What limitations do the authors mention (and are there any they omit)?

About

Meetup to discuss state-of-the-art research on robot learning, similar to the Toronto ML/Systems Reading Group and the Vector Institute's Machine Learning Lunches. The list of topics and articles is below; all are welcome! 🎉

Annotate papers through alphaXiv:

Week ? - Symbolic Planning

Shah, et al. 2025. From Real World to Logic and Back: Learning Generalizable Relational Concepts For Long Horizon Robot Planning.

Week ? - Tactile Sensing

Sharma, et al. 2025. Self-supervised perception for tactile skin covered dexterous hands.

Week ? - Out of distribution

One of the key challenges is robustness: what happens when a robot goes out of distribution, and how it recovers. I would say this is the key challenge right now.

Offline RL enables learning from suboptimal demonstrations, but it is not necessarily robust to edge cases.

Week ? - Cross-Embodiment

Workshops:

Yang, et al., 2024. Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation.

Week ? - Egocentric Papers / Learning from human videos

Grauman, et al. 2022. Ego4D: Around the World in 3,000 Hours of Egocentric Video.
Kareer, et al. 2024. EgoMimic: Scaling Imitation Learning via Egocentric Video.
Athalye, et al. 2025. From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models.
Doshi, et al. 2024. Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation.
Patel, et al. 2024. GET-Zero: Graph Embodiment Transformer for Zero-shot Embodiment Generalization.
Niu, et al. 2025. Pre-training Auto-regressive Robotic Models with 4D Representations.
Liu, et al. 2025. EgoZero: Robot Learning from Smart Glasses.
Ye, et al. 2025. MM-Ego: Towards Building Egocentric Multimodal LLMs for Video QA.

Week 6 - CoRL Shiza top paper

Week 5 - Data Collection

Grauman, et al. 2022. Ego4D: Around the World in 3,000 Hours of Egocentric Video.
Wu, et al. 2023. GELLO: A General, Low-Cost, and Intuitive Teleoperation Framework for Robot Manipulators.
Zhao, et al. 2023. Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware.
Chi, et al. 2024. Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots.
Etukuru, et al. 2024. Robot Utility Models: General Policies for Zero-Shot Deployment in New Environments.
Iyer, et al. 2024. OPEN TEACH: A Versatile Teleoperation System for Robotic Manipulation.
Xu, et al. 2025. DexUMI: Using Human Hand as the Universal Manipulation Interface for Dexterous Manipulation.
Si, et al. 2025. ExoStart: Efficient Learning for Dexterous Manipulation with Sensorized Exoskeleton Demonstrations.
Wu, et al. 2025. MagiClaw: A Dual-Use, Vision-Based Soft Gripper for Bridging the Human Demonstration to Robotic Deployment Gap.

Workshops:

Week 4 - Swarm Robotics

https://luma.com/fjbqcg94

LLM2Swarm: Robot Swarms that Responsively Reason, Plan, and Collaborate through LLMs.
Swarm Robotic Behaviors and Current Applications.
HeRo 2.0: A Low-Cost Robot for Swarm Robotics Research.

Week 3 - Real2Sim + Sim2Real / Whole-Body Control

Kalaria, et al. 2025. DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion. Presented by Ayush Garg.

Additional Reading List:

Background knowledge:

Extra articles:

Week 2 - World Models

Topic - World Models

Additional Reading List:

Workshops:

Week 1 - Robot Foundation Models

Additional Reading List

Additional Resources