Interaction-Aware Planning and Control

As robots enter our everyday lives, important questions arise about how humans and robots interact. We are particularly interested in how robots can influence the behavior of the humans around them. To study this, we model the interaction as a multi-agent game, which can be cooperative (e.g., tasks that involve human-robot collaboration) or competitive (e.g., traffic). We investigate the policies that arise in these games and their implications for the efficiency and safety of human-robot interaction.
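
As a rough formalization (the notation below is an illustrative sketch, not a definition taken from any single paper), the robot can choose its action by optimizing its own reward while predicting how the human will respond:

    u_R^* = \arg\max_{u_R} R_R\big(x, u_R, u_H^*(u_R)\big), \qquad u_H^*(u_R) = \arg\max_{u_H} R_H\big(x, u_R, u_H\big)

Here x is the joint state, u_R and u_H are the robot's and human's actions, and R_R, R_H are their reward functions. The interaction is cooperative when the two rewards are aligned and competitive when they conflict; modeling the human response u_H^*(u_R) is what lets the robot reason about how its own choices influence human behavior.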

Interaction-Aware Control

Cars interact with each other in traffic.
After casting these real-life interaction scenarios as multi-agent games, we formulate and solve the corresponding control problems using tools from control theory, game theory, and artificial intelligence. By taking human behavior into account, our solutions yield high-performance robot policies, improving both the performance of the robot itself and that of the overall system. We validate our models in a variety of environments and tasks, e.g., autonomous driving.
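
The toy example below is a minimal sketch of this kind of nested, interaction-aware planning; the action grids, reward functions, and the one-step merging scenario are illustrative assumptions rather than the lab's actual models or code.

    import numpy as np

    # Candidate longitudinal accelerations for a simple one-step merging scenario.
    robot_actions = np.linspace(-2.0, 2.0, 21)
    human_actions = np.linspace(-2.0, 2.0, 21)

    def human_reward(u_h, u_r, gap=5.0):
        # Assumed human objective: keep roughly the current gap and avoid harsh inputs.
        predicted_gap = gap + (u_r - u_h)
        return -(predicted_gap - gap) ** 2 - 0.5 * u_h ** 2

    def robot_reward(u_r, u_h):
        # Assumed robot objective: make progress while keeping the joint motion smooth.
        return 2.0 * u_r - 0.1 * (u_r - u_h) ** 2 - 0.5 * u_r ** 2

    def predicted_human_response(u_r):
        # The human is modeled as best-responding to the robot's action.
        return max(human_actions, key=lambda u_h: human_reward(u_h, u_r))

    # Interaction-aware choice: score each robot action under the predicted response.
    u_r_star = max(robot_actions,
                   key=lambda u_r: robot_reward(u_r, predicted_human_response(u_r)))
    print("robot action:", u_r_star,
          "predicted human response:", predicted_human_response(u_r_star))

Because the robot evaluates each candidate action through the human's predicted response, it can discover actions that deliberately shape human behavior rather than merely reacting to it.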

Active Information Gathering over Humans' Internal State

Robots can infer humans' internal state by carefully designing their interactions with them.
Understanding human teaming patterns is crucial for efficient human-robot collaboration. In ILIAD, we learn models that capture latent patterns in multi-agent teams and leverage them to build collaborative robot policies. We aim to enable robot teammates to influence their human teammates in changing environments, without having to explicitly assign leading and following roles a priori. We validate our frameworks in complex, realistic scenarios.
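
The sketch below illustrates one simple form of information gathering over a human's internal state, using an assumed two-style driver model with hand-picked response probabilities (none of which come from the lab's systems): the robot maintains a belief over the latent style and selects the probing action with the highest expected information gain.

    import numpy as np

    styles = ["attentive", "distracted"]
    belief = np.array([0.5, 0.5])                  # prior over the latent style
    robot_actions = ["nudge_in", "hold_back"]
    human_actions = ["yield", "ignore"]

    # Assumed human response model P(human action | style, robot action).
    response_model = {
        ("attentive",  "nudge_in"):  {"yield": 0.9, "ignore": 0.1},
        ("attentive",  "hold_back"): {"yield": 0.5, "ignore": 0.5},
        ("distracted", "nudge_in"):  {"yield": 0.3, "ignore": 0.7},
        ("distracted", "hold_back"): {"yield": 0.5, "ignore": 0.5},
    }

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def posterior(belief, u_r, u_h):
        # Bayesian update of the belief after observing the human's action u_h.
        likelihood = np.array([response_model[(s, u_r)][u_h] for s in styles])
        post = belief * likelihood
        return post / post.sum()

    def expected_info_gain(belief, u_r):
        # Expected reduction in belief entropy if the robot takes action u_r.
        gain = 0.0
        for u_h in human_actions:
            p_obs = sum(belief[i] * response_model[(s, u_r)][u_h]
                        for i, s in enumerate(styles))
            gain += p_obs * (entropy(belief) - entropy(posterior(belief, u_r, u_h)))
        return gain

    probe = max(robot_actions, key=lambda u_r: expected_info_gain(belief, u_r))
    print("most informative robot action:", probe)

In this toy model, nudging in is informative because attentive and distracted drivers respond to it differently, while holding back reveals nothing; acting to reduce uncertainty in this way is the core idea behind information gathering actions.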

Multi-Agent Interactions

Decentralized planning is crucial in many multi-agent scenarios.
In many settings, multiple robots need to operate autonomously in a shared environment. We develop learning and control algorithms that achieve high performance and/or imitate complex human behavior in cooperative and competitive multi-agent environments. We design our algorithms so that robots stay safe and perform efficiently with respect to their objectives, even when the problem is naturally decentralized.
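
As a minimal illustration of decentralized safety (an assumed toy rule, not one of the lab's algorithms), each agent below plans from its own local observation and caps its step so that the gap to its neighbor cannot fall below a threshold, even if the neighbor simultaneously moves toward it:

    import numpy as np

    def local_policy(own_pos, own_goal):
        # Each agent independently moves toward its own goal at bounded speed.
        return np.clip(own_goal - own_pos, -1.0, 1.0)

    def safe_step(desired, own_pos, neighbor_pos, min_gap=1.0):
        # Cap the step so the gap stays >= min_gap even if the neighbor
        # moves toward us by the same capped amount.
        gap = abs(own_pos - neighbor_pos)
        max_step = max(0.0, (gap - min_gap) / 2.0)
        return float(np.clip(desired, -max_step, max_step))

    # Two agents on a line, each heading toward the other's starting position.
    positions = np.array([0.0, 6.0])
    goals = np.array([6.0, 0.0])
    for _ in range(10):
        steps = [safe_step(local_policy(positions[i], goals[i]),
                           positions[i], positions[1 - i])
                 for i in range(2)]
        positions = positions + np.array(steps)
    print("final positions:", positions)   # the agents stop short, preserving the gap

This rule keeps the agents safe without any central coordination, but it also deadlocks them short of their goals; resolving exactly this kind of tension between safety and efficiency is what motivates our decentralized learning and control algorithms.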


Incomplete List of Related Publications:
  • Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia, Anca D. Dragan. Planning for Autonomous Cars that Leverage Effects on Human Actions. Proceedings of Robotics: Science and Systems (RSS), June 2016. [PDF]
  • Dorsa Sadigh, Nick Landolfi, S. Shankar Sastry, Sanjit A. Seshia, Anca D. Dragan. Planning for Cars that Coordinate with People: Leveraging Effects on Human Actions for Planning and Active Information Gathering over Human Internal State. Autonomous Robots (AURO), October 2018. [PDF]
  • Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia, Anca Dragan. Information Gathering Actions over Human Internal State. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016. [PDF]
  • Jiaming Song, Hongyu Ren, Dorsa Sadigh, Stefano Ermon. Multi-Agent Generative Adversarial Imitation Learning. Conference on Neural Information Processing Systems (NeurIPS), December 2018. [PDF]
  • Minae Kwon*, Mengxi Li*, Alexandre Bucquet, Dorsa Sadigh. Influencing Leading and Following in Human-Robot Teams. Proceedings of Robotics: Science and Systems (RSS), June 2019. [PDF]