Robot Learning Reading Group

We are starting this robot learning reading group with New Systems, and may host more groups afterward.

Similar events:

Questions we should answer as we read these papers:

  • What are the authors trying to do? Articulate their objectives.
  • How was it done prior to their work, and what were the limits of current practice?
  • What is new in their approach, and why do they think it will be successful?
  • What are the mid-term and final “exams” to check for success? (i.e., How is the method evaluated?)
  • What limitations do the authors mention, and are there any they omit?

About

Meetup to discuss state-of-the-art research on robot learning, similar to the Toronto ML/Systems Reading Group and the Vector Institute’s Machine Learning Lunches. A list of topics and articles is below; all are welcome! 🎉

Annotate papers through alphaXiv:

Week ? - Out of Distribution

One of the key challenges is robustness: what happens when a robot goes out of distribution, and how it recovers. I would say this is the key challenge right now.

Offline RL enables learning from suboptimal demonstrations, but it is not necessarily robust to edge cases.
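To make the out-of-distribution problem concrete, here is a minimal sketch (a hypothetical toy example, not from any of the papers listed here) of one common heuristic: flagging OOD states via disagreement among an ensemble of models trained on the same demonstrations. All names and data in this snippet are invented for illustration.

```python
# Toy OOD detection via ensemble disagreement (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

# Fake "demonstration" data: states in [0, 1], actions roughly 2 * state.
states = rng.uniform(0.0, 1.0, size=(200, 1))
actions = 2.0 * states + 0.01 * rng.normal(size=(200, 1))

# Ensemble of linear policies, each fit on a bootstrap resample.
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(states), size=len(states))
    X = np.hstack([states[idx], np.ones((len(idx), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, actions[idx], rcond=None)
    ensemble.append(w)

def disagreement(state):
    """Std-dev of ensemble predictions; larger values suggest OOD input."""
    x = np.array([state, 1.0])
    preds = [float(x @ w) for w in ensemble]
    return float(np.std(preds))

in_dist = disagreement(0.5)    # inside the training range
out_dist = disagreement(10.0)  # far outside the training range
print(in_dist, out_dist)
```

The ensemble members agree near the demonstration data and diverge when extrapolating, so the disagreement score is much larger at the out-of-range state; this is the kind of edge-case failure the discussion above is about.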

Week ? - Cross-Embodiment

Workshops:

Yang et al., 2024. Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation.

Week ? - Egocentric Papers

Kareer et al., 2024. EgoMimic: Scaling Imitation Learning via Egocentric Video.
Athalye et al., 2025. From Pixels to Predicates: Learning Symbolic World Models via Pretrained Vision-Language Models.
Doshi et al., 2024. Scaling Cross-Embodied Learning: One Policy for Manipulation, Navigation, Locomotion and Aviation.
Patel et al., 2024. GET-Zero: Graph Embodiment Transformer for Zero-shot Embodiment Generalization.
Niu et al., 2025. Pre-training Auto-regressive Robotic Models with 4D Representations.

Week ? - Shiza's Top Paper

Week ? - Learning from Human Videos

Liu et al., 2025. EgoZero: Robot Learning from Smart Glasses.
Ye et al., 2025. MM-Ego: Towards Building Egocentric Multimodal LLMs for Video QA.

Week 3 - Real2Sim + Sim2Real / Whole-Body Control

Kalaria et al., 2025. DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion (presented by Ayush Garg).

Additional Reading List:

Background knowledge:

Extra articles:

Week 2 - World Models


Additional Reading List:

Workshops:

Week 1 - Robot Foundation Models

Additional Reading List:

Additional Resources