
The Stanford Intelligent and Interactive Autonomous Systems Group (ILIAD) develops algorithms for AI agents that safely and reliably interact with people. Our mission is to develop the theoretical foundations of human-robot and human-AI interaction. Our group focuses on: 1) formalizing interaction and developing new learning and control algorithms for interactive systems, inspired by tools and techniques from game theory, cognitive science, optimization, and representation learning; and 2) developing practical robotics algorithms that enable robots to safely and seamlessly coordinate with, collaborate with, compete against, or influence humans.

Recent News

Sep 28, 2024: Six of our papers were accepted to the Conference on Robot Learning (CoRL) 2024:
- FlowRetrieval: Flow-Guided Data Retrieval for Few-Shot Imitation Learning
- Vocal Sandbox: Continual Learning and Adaptation for Situated Human-Robot Collaboration (Oral)
- So You Think You Can Scale Up Autonomous Robot Data Collection?
- RT-Sketch: Goal-Conditioned Imitation Learning from Hand-Drawn Sketches (Oral)
- Re-Mix: Optimizing Data Mixtures for Large Scale Imitation Learning (Outstanding Paper Award Finalist, Oral)
- OpenVLA: An Open-Source Vision-Language-Action Model (Outstanding Paper Award Finalist)
May 13, 2024: Eight of our papers were accepted to Robotics: Science and Systems (RSS) 2024:
- Efficient Data Collection for Robotic Manipulation via Compositional Generalization
- Imitation Bootstrapped Reinforcement Learning
- Explore until Confident: Efficient Exploration for Embodied Question Answering
- RT-H: Action Hierarchies Using Language
- Pushing the Limits of Cross-Embodiment Learning for Manipulation and Navigation
- DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset
- Octo: An Open-Source Generalist Robot Policy
- Learning to Learn Faster from Human Feedback with Language Model Predictive Control
May 1, 2024: Two of our papers were accepted to the International Conference on Machine Learning (ICML) 2024:
- Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models
- Chain of Code: Reasoning with a Language Model-Augmented Code Emulator
Feb 15, 2024: Five of our papers were accepted to the International Conference on Robotics and Automation (ICRA) 2024:
- Physically Grounded Vision-Language Models for Robotic Manipulation
- Toward Grounded Commonsense Reasoning
- Distilling and Retrieving Generalizable Knowledge for Robot Manipulation via Language Corrections
- How to Prompt Your Robot: A Prompt Book for Manipulation Skills with Code as Policies
- Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Jan 30, 2024: Our paper "Contrastive Preference Learning: Learning from Human Feedback without RL" was accepted to the International Conference on Learning Representations (ICLR) 2024!

Recent Talk

Dorsa's seminar talk on "Learning Representations for Interactive Robotics"