Interactions

Formalizing Human-AI Interactions

Shared representations (such as roles or intents) capture the structure sufficient for collaboration.
As robots enter our lives, many important questions arise about the interaction between humans and robots. What equilibria do agents reach after repeated interactions? What behaviors emerge as a result of repeated interactions? Can leading and following (in other words, teaching and learning) naturally emerge? Can the AI agent guide the human toward more desirable outcomes? Can agents develop partner-specific conventions throughout the interaction? How do conventions adapt over time, i.e., how do partners adapt to each other over long-term interactions? How can repeated interactions influence trust or deception?

In practice, humans are able to seamlessly interact and adapt even in complex settings. Given humans' computational constraints (they are boundedly rational, with limited memory and time), we believe the reason humans can interact with each other so easily is that interactions, despite their apparent complexity, are inherently structured. Our goal is to discover such structures and leverage them for better collaboration and influence. We study interactions in the space of assistive teleoperation [ICRA 2020, RSS 2020, IROS 2020], dyadic interactions [CoRL 2019, IROS 2019], and autonomous driving and navigation [RSS 2016, IROS 2016, AURO 2018, RSS 2019].

Assistive Teleoperation

For almost one million American adults living with physical disabilities, picking up a bite of food or pouring a glass of water presents a significant challenge. Wheelchair-mounted robotic arms and other physically assistive devices hold the promise of increasing user autonomy, reducing reliance on caregivers, and improving quality of life. Unfortunately, the very dexterity that makes these robotic assistants useful also makes them hard for humans to control. Today's users must teleoperate their assistive robots throughout entire tasks. For instance, to control an assistive robot for eating, a user must carefully orchestrate the position and orientation of the end-effector to move a fork to the plate, spear a morsel of food, and then guide the food back toward their mouth. These challenges are often prohibitive: users living with disabilities report choosing not to use their assistive robot for eating because of the associated difficulty. Our key insight is that controlling high-dimensional robots becomes easier when we learn and leverage low-dimensional representations of actions, which enable users to convey their intentions, goals, and plans to the robot through simple, intuitive, low-dimensional inputs.
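To make this concrete, below is a minimal sketch of how such a low-dimensional action representation might be learned with a conditional autoencoder, assuming a dataset of (state, action) pairs from demonstrations. The architecture, dimensions, and training loop are illustrative stand-ins, not a reproduction of our published models.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 7, 7, 2   # a 2-D latent maps to a joystick

class ConditionalAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses a high-DoF action (given the state) into a latent z.
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM))
        # Decoder maps (state, z) back to a full robot action, so at test time
        # a 2-D joystick input z can drive the 7-DoF arm.
        self.decoder = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM))

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        return self.decoder(torch.cat([state, z], dim=-1))

# Synthetic stand-in for demonstration data; real data would come from
# kinesthetic demonstrations of the task.
demos = [(torch.randn(32, STATE_DIM), torch.randn(32, ACTION_DIM))
         for _ in range(100)]

model = ConditionalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for state, action in demos:
    recon = model(state, action)
    loss = ((recon - action) ** 2).mean()     # reconstruction objective
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, only the decoder is kept: the user's low-dimensional joystick input plays the role of z, and the decoder expands it into a full arm motion appropriate for the current state.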

Imagine that you are working with an assistive robot to grab food from your plate. Here we have placed three marshmallows on a table in front of the user, and the person needs to make the robot grab one of them using their joystick.

Importantly, the robot does not know which marshmallow the human wants! Ideally, the robot will make this task easier by learning a simple mapping between the person's inputs and their desired marshmallow.
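One common way to formalize this inference, sketched below under simplifying assumptions rather than as the exact model in our papers, is Bayesian goal prediction: treat the joystick input as noisily rational with respect to the intended goal, and update a belief over the candidate marshmallows. The positions and the rationality constant beta here are illustrative.

```python
# Toy Bayesian goal inference: which marshmallow does the user want?
import numpy as np

goals = np.array([[0.3, 0.1], [0.5, 0.2], [0.7, 0.1]])  # marshmallow positions
belief = np.ones(len(goals)) / len(goals)                # uniform prior
beta = 5.0                                               # rationality constant

def update_belief(belief, ee_pos, joystick):
    # P(input | goal) ~ exp(beta * progress toward that goal)
    directions = goals - ee_pos
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    progress = directions @ joystick        # alignment of input with each goal
    likelihood = np.exp(beta * progress)
    posterior = belief * likelihood
    return posterior / posterior.sum()

ee_pos = np.array([0.5, 0.5])
joystick = np.array([0.0, -1.0])            # user pushes toward the plate
belief = update_belief(belief, ee_pos, joystick)
print(belief)  # mass shifts to the goal(s) consistent with the input
```

Pushing the joystick toward the plate shifts probability mass onto whichever marshmallow best explains that motion, without the user ever specifying the goal explicitly.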

Our work addresses challenges in assistive teleoperation, and specifically assistive feeding, by leveraging latent representations of actions to enable intuitive robot control, integrating shared autonomy with latent actions to enable precise manipulation for food acquisition and transfer, and learning personalized controllers to enable faster adaptation to specific users.
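As a rough illustration of how shared autonomy and latent actions can be combined, the sketch below blends the user's decoded latent action with an assistance term toward the most likely goal, weighting assistance by the robot's confidence. This linear arbitration rule is a common scheme shown for illustration, not a verbatim statement of our method; the `blend` function and its gain are hypothetical.

```python
import numpy as np

def blend(decoded_action, ee_pos, goals, belief, gain=1.0):
    # Confidence-weighted arbitration: the more certain the robot is about
    # the goal, the more it assists; otherwise the user's input dominates.
    confidence = belief.max()
    g = goals[belief.argmax()]                 # most likely goal
    assist = gain * (g - ee_pos)               # assistance toward that goal
    return (1 - confidence) * decoded_action + confidence * assist

goals = np.array([[0.3, 0.1], [0.5, 0.2], [0.7, 0.1]])
belief = np.array([0.1, 0.8, 0.1])             # from goal inference
decoded = np.array([0.0, -0.5])                # from the latent-action decoder
print(blend(decoded, np.array([0.5, 0.5]), goals, belief))
```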

Learning Latent Strategies

When teams of robots collaborate to complete a task, communication is often necessary. Like humans, robot teammates should implicitly communicate through their actions: but interpreting a partner's actions is typically difficult, since a given action may have many different underlying reasons. We propose using low-dimensional representations of a partner's actions (such as leading or following roles) to correctly interpret the meaning behind those actions.

We use these low-dimensional roles to learn from our partner's actions and collaborate on transporting rigid objects in a decentralized fashion. In multi-agent settings, we learn graph representations of emergent leading and following to coordinate between humans and robots. These low-dimensional representations can be learned directly through repeated interactions.
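As a toy illustration of inferring such a role from a partner's actions, consider a robot carrying an object with a partner while maintaining a Bayesian belief over whether the partner is leading or following. The force-based observation models below are illustrative assumptions, not the learned representations from our papers.

```python
# Toy inference of a partner's latent role (leader vs. follower) during
# decentralized object carrying. Each role predicts a different force
# profile; a Bayesian filter tracks which role explains the measurements.
import numpy as np

roles = ["leader", "follower"]
belief = np.array([0.5, 0.5])

def likelihood(force, role):
    # Leaders push firmly toward their own goal; followers stay compliant.
    mean = 4.0 if role == "leader" else 0.5   # expected force magnitude (N)
    return np.exp(-0.5 * (np.linalg.norm(force) - mean) ** 2)

def update(belief, force):
    post = belief * np.array([likelihood(force, r) for r in roles])
    return post / post.sum()

for force in [np.array([3.5, 1.0]), np.array([4.2, 0.3])]:  # measured wrenches
    belief = update(belief, force)
print(dict(zip(roles, belief)))   # belief shifts toward "leader"
```

Once the robot is confident the partner is leading, it can adopt the complementary role and become compliant, and vice versa.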

Interaction-Aware Control

Cars interact with each other in traffic.
Using control theory and game theory, we can control autonomous robots to coordinate with, or even influence, other agents such as humans. We leverage learned representations or learned human models in our planning and control algorithms. Specifically, we explicitly model how robot actions influence human responses. This enables us to plan for more effective coordination in both cooperative and competitive settings, and to positively influence other agents' decision making.
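Concretely, modeling influence can be cast as a nested, Stackelberg-style optimization: the robot selects its action while anticipating the human's best response to it. The one-step, scalar sketch below uses illustrative reward functions in place of learned human models.

```python
# Sketch of interaction-aware planning: the robot chooses its action
# assuming the human best-responds to it (a nested optimization).
import numpy as np

actions = np.linspace(-1.0, 1.0, 41)            # candidate accelerations

def human_reward(u_h, u_r):
    return -(u_h - 0.5 * u_r) ** 2              # human yields to the robot's move

def robot_reward(u_r, u_h):
    return -(u_r - 1.0) ** 2 - (u_h - u_r) ** 2 # make progress, stay coordinated

def best_response(u_r):
    # The human's (modeled) reaction to a candidate robot action.
    return actions[np.argmax([human_reward(u_h, u_r) for u_h in actions])]

# The robot plans while anticipating the human's response to each candidate.
u_r_star = max(actions, key=lambda u_r: robot_reward(u_r, best_response(u_r)))
print(u_r_star, best_response(u_r_star))
```

Because the robot accounts for the human's response, it settles on u_r = 0.8 rather than naively driving to its standalone optimum of 1.0.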

Robots can infer humans' internal state by carefully designing their interactions with them. Here the autonomous car nudges toward the destination lane to actively gather information about the driving style of the white car.

Taking this idea to the space of autonomous driving, we first learn human driving behavior through imitation learning or preference-based learning. Leveraging these models, we can then plan to influence human-driven cars for better safety, efficiency, and coordination. Our work actively gathers information about the driving style of other vehicles to discover their policies and influence them toward more desirable strategies.
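One concrete recipe for such active information gathering, sketched here under simplified assumptions: maintain a belief over the other driver's style, predict how each style would respond to a candidate robot action, and pick the action whose predicted responses minimize the expected posterior entropy. The two styles, response models, and noise level below are illustrative.

```python
# Sketch of active information gathering over a driver's internal state:
# pick the robot action (e.g., how far to nudge into the lane) whose
# predicted human responses best disambiguate candidate driving styles.
import numpy as np

styles = ["aggressive", "timid"]
belief = np.array([0.5, 0.5])
sigma = 0.2                                     # response noise

def predicted_response(style, u_r):
    # Aggressive drivers hold their speed when nudged; timid drivers brake.
    return 0.0 if style == "aggressive" else -1.0 * u_r

def expected_entropy(belief, u_r):
    # Posterior entropy, averaged over each style's predicted response.
    H = 0.0
    for i, style in enumerate(styles):
        obs = predicted_response(style, u_r)
        lik = np.array([np.exp(-0.5 * ((obs - predicted_response(s, u_r)) / sigma) ** 2)
                        for s in styles])
        post = belief * lik
        post /= post.sum()
        H += belief[i] * -(post * np.log(post + 1e-12)).sum()
    return H

nudges = [0.0, 0.3, 0.6]                        # candidate nudge magnitudes
best = min(nudges, key=lambda u: expected_entropy(belief, u))
print(best)   # larger nudges are more informative in this toy model
```

Staying in lane (a nudge of 0.0) reveals nothing, since both styles respond identically; the probing action is chosen precisely because the styles respond to it differently.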

Incomplete List of Related Publications:
  • Mengxi Li, Dylan P. Losey, Jeannette Bohg, Dorsa Sadigh. Learning User-Preferred Mappings for Intuitive Robot Control. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2020. [PDF]
  • Hong Jun Jeon, Dylan P. Losey, Dorsa Sadigh. Shared Autonomy with Learned Latent Actions. Proceedings of Robotics: Science and Systems (RSS), July 2020. [PDF]
  • Dylan P. Losey, Krishnan Srinivasan, Ajay Mandlekar, Animesh Garg, Dorsa Sadigh. Controlling Assistive Robots with Learned Latent Actions. Proceedings of the International Conference on Robotics and Automation (ICRA), May 2020. [PDF]
  • Dylan P. Losey, Dorsa Sadigh. Robots that Take Advantage of Human Trust. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), November 2019. [PDF]
  • Dylan P. Losey*, Mengxi Li*, Jeannette Bohg, Dorsa Sadigh. Learning from My Partner's Actions: Roles in Decentralized Robot Teams. Proceedings of the 3rd Conference on Robot Learning (CoRL), October 2019. [PDF]
  • Minae Kwon*, Mengxi Li*, Alexandre Bucquet, Dorsa Sadigh. Influencing Leading and Following in Human-Robot Teams. Proceedings of Robotics: Science and Systems (RSS), June 2019. [PDF]
  • Jiaming Song, Hongyu Ren, Dorsa Sadigh, Stefano Ermon. Multi-Agent Generative Adversarial Imitation Learning. Conference on Neural Information Processing Systems (NeurIPS), December 2018. [PDF]
  • Dorsa Sadigh, Nick Landolfi, S. Shankar Sastry, Sanjit A. Seshia, Anca D. Dragan. Planning for Cars that Coordinate with People: Leveraging Effects on Human Actions for Planning and Active Information Gathering over Human Internal State. Autonomous Robots (AURO), October 2018. [PDF]
  • Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia, Anca Dragan. Information Gathering Actions over Human Internal State. Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2016. [PDF]
  • Dorsa Sadigh, S. Shankar Sastry, Sanjit A. Seshia, Anca D. Dragan. Planning for Autonomous Cars that Leverage Effects on Human Actions. Proceedings of Robotics: Science and Systems (RSS), June 2016. [PDF]