Diverse Conventions for Human-AI Collaboration


  • Bidipta Sarkar
  • Andy Shih
  • Dorsa Sadigh




Abstract

Conventions are crucial for strong performance in cooperative multi-agent games, because they allow players to coordinate on a shared strategy without explicit communication. Unfortunately, standard multi-agent reinforcement learning techniques, such as self-play, converge to conventions that are arbitrary and non-diverse, leading to poor generalization when interacting with new partners. In this work, we present a technique for generating diverse conventions by (1) maximizing their rewards during self-play, while (2) minimizing their rewards when playing with previously discovered conventions (cross-play), encouraging conventions to be semantically different. To ensure that learned policies act in good faith despite the adversarial optimization of cross-play, we introduce mixed-play, in which an initial state is generated by randomly sampling self-play and cross-play transitions, and the player then learns to maximize the self-play reward from that state. We analyze the benefits of our technique on various multi-agent collaborative games, including Overcooked, and find that it can adapt to the conventions of humans, surpassing human-level performance when paired with real users.
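To make the first two terms of this objective concrete, below is a minimal, hypothetical Python sketch on a toy one-shot coordination game (the mixed-play term is omitted). The helper names, the xp_weight coefficient, and the choice to penalize the single most compatible prior convention are illustrative assumptions, not details taken from the paper; here a "convention" is just a fixed action rather than a trained RL policy.

    # Toy one-shot coordination game: both players score 1 if they pick the
    # same action, 0 otherwise. A "convention" here is a fixed action; in
    # the paper it would be a full RL policy for a game like Overcooked.
    N_ACTIONS = 4

    def play(pi_a, pi_b):
        """Reward when convention pi_a is paired with convention pi_b."""
        return 1.0 if pi_a == pi_b else 0.0

    def convention_objective(pi, prior_conventions, xp_weight=0.5):
        """(1) Maximize self-play reward while (2) minimizing cross-play
        reward against previously discovered conventions."""
        sp = play(pi, pi)  # self-play reward
        if not prior_conventions:
            return sp
        # Penalize the most compatible prior convention (an assumed
        # instantiation of the cross-play penalty), pushing pi to be
        # semantically different from everything already in the pool.
        worst_xp = max(play(pi, prev) for prev in prior_conventions)
        return sp - xp_weight * worst_xp

    # Greedily grow a pool of mutually incompatible, i.e. diverse, conventions.
    pool = []
    for _ in range(3):
        best = max(range(N_ACTIONS),
                   key=lambda a: convention_objective(a, pool))
        pool.append(best)
    print(pool)  # e.g. [0, 1, 2]: three semantically different conventions

In the full method, the mixed-play term described in the abstract additionally trains each policy to maximize self-play reward from states reached by randomly mixing self-play and cross-play transitions, which keeps the adversarial cross-play minimization from producing policies that sabotage the task itself.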

Human-AI Interaction Videos

In the following videos, a human controls the blue player while a trained AI controls the green player. Self-Play (SP) and Statistical Diversity (ADAP) are baselines, while Cross-Play Min (XP) and CoMeDi are our techniques.

  • CoMeDi (Ours)
  • Cross-Play Min (XP)
  • Statistical Diversity (ADAP)
  • Self-Play (SP)

Overcooked Interaction Demo

In the space below, we have a live demo of the trained agents in Overcooked. Select the AI from the "Algo" dropdown and choose a layout. Use the arrow keys to move your player and the space bar to interact with the object your player is facing.

Citation



The website template was borrowed from Jon Barron and RT-1.