Key Facts

Live Session Time: 09:30 AM - 11:15 AM (PT),  Full Web Site: Click Here,
Prerecorded Talks: Link TBA,  Contact Information: ebiyik [at] stanford [dot] edu

Speakers (PDF Version)

Brenna Argall

Northwestern University

Anca Dragan

University of California, Berkeley

Judith Fan

University of California, San Diego

Jakob Foerster

Facebook AI & University of Toronto

Robert D. Hawkins

Princeton University

Maja Matarić

University of Southern California

Negar Mehr

Stanford University & University of Illinois Urbana-Champaign

Igor Mordatch

Google Brain

Harold Soh

National University of Singapore

Tentative Schedule

30-minute talks by the invited speakers will be made available on YouTube and linked on the web site before the workshop.
All times below are in Pacific Time (PT).

09:15 AM - 09:30 AM RSS-wide Virtual Socializing Session
09:30 AM - 10:30 AM Panel (Speakers: Brenna Argall, Anca Dragan, Judith Fan, Jakob Foerster, Robert D. Hawkins, Maja Matarić, Igor Mordatch)
10:30 AM - 11:00 AM Spotlight Talks
  • Mitigating Undesirable Emergent Behavior Arising Between Driver and Semi-automated Vehicle [abstract]
    Timo Melman*, Niek Beckers*, David Abbink
  • On the Critical Role of Conventions in Adaptive Human-AI Collaboration [abstract]
    Andy Shih, Arjun Sawhney, Dorsa Sadigh

    Humans can quickly adapt to new partners in collaborative tasks (e.g., playing basketball) because they understand which fundamental skills of the task (e.g., how to dribble, how to shoot) carry over to new partners. Thus, they can effectively develop a convention with a new partner (e.g., pointing down signals a bounce pass, pointing up signals a lob pass) without being distracted by the full complexity of the task. To collaborate seamlessly with humans, AI agents should develop and adapt to conventions for different human partners.

    While many previous works have acknowledged the importance of learning conventions for human-AI collaboration, current approaches do not distinguish between skills intrinsic to the task and convention information specific to a partner. In this work, we formally define conventions as shared representations between partners that can evolve through repeated interactions. We propose a framework that teases apart the rule-dependent representation from a low-dimensional convention-dependent representation in a principled way. Furthermore, we characterize the importance of conventions in collaborative tasks, and we learn conventions to quickly adapt to new partners without relearning the full complexities of the task.

    Finally, we study how humans adapt and respond to different conventions and partners. Our human-subject studies suggest that humans adapt faster when AI agents use explainable conventions, and that they are unable to adapt to convoluted or unexplainable conventions.
  • Emergent Correlated Equilibrium through Synchronized Exploration [abstract]
    Mark Beliaev*, Woodrow Z. Wang*, Daniel A. Lazar, Erdem Bıyık, Dorsa Sadigh, Ramtin Pedarsani
  • Towards Emerging Nonverbal Communication Protocols for Multi-Robot Populations [abstract]
    Kalesha Bullard, Jakob Foerster, Douwe Kiela, Joelle Pineau, Franziska Meier
  • Coordinating on Shared Procedural Abstractions for Physical Assembly [abstract]
    Will McCarthy, Cameron Holdaway, Robert Hawkins, Judith Fan

    Many real-world tasks require collaboration between multiple autonomous agents. The current study investigates mechanisms that enable human teams to work together more effectively in a physical reasoning task that requires coordinated decision making across an extended interaction.

    We paired human participants in an online environment: one participant was assigned the role of Architect, and the other the role of Builder. On each trial, the Architect viewed a scene containing two block towers, consisting of four blocks each, but was unable to place blocks themselves; conversely, the Builder was given a fixed inventory of blocks, but was unable to see the target scene. The goal of the Architect was to issue instructions in natural language to the Builder, whose goal it was to reconstruct the target scene. Each scene was presented four times, interleaved among other scenes, allowing us to examine behavioral signatures of the emergence of shared procedural abstractions across repeated collaboration.

    Specifically, we hypothesized that the instructions provided by the Architect would become more abstract over time, reflecting the accumulation of interaction history. That is, initial instructions would provide extensive details about simple actions (e.g., “Place a red domino three spaces to the left of the wall.”), while later instructions would shift to more concise language encoding complex sequences of actions (e.g., “Build a C on the right.”). Among N=14 pairs of participants who met our minimum performance threshold, we found that Architects provided more concise instructions across repetitions of the same tower pairs (first: 59.7 words; final: 23.0 words; b = -11.8, t = -10.5, p < 0.001), concurrent with improvement in reconstruction accuracy (first: 89.6% overlap, final: 98.9%; b = 3.17, t = 3.99, p < 0.001). These results suggest that participants were able to rapidly coordinate on linguistic conventions for parsing each assembly task into reusable subgoals.

    In future work, we plan to further analyze the content and structure of these linguistic conventions (e.g., emergence of unique tokens for towers and scenes), and develop autonomous artificial agents who can emulate human behavior in the Architect and Builder roles. In the long term, such studies may shed light on how goal-relevant abstractions emerge from interaction between intelligent, autonomous agents.

  • Robot Learning Collaborative Manipulation Plans from YouTube Cooking Videos [abstract]
    Hejia Zhang, Stefanos Nikolaidis
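The convention-learning abstract above proposes separating a shared, rule-dependent task representation from a low-dimensional, partner-specific convention representation. A minimal sketch of that factorization idea is shown below; all names, dimensions, and the random-feature setup are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): observations,
# shared task features, the low-dimensional convention, and discrete actions.
OBS_DIM, TASK_DIM, CONV_DIM, ACT_DIM = 8, 16, 2, 4

# Rule-dependent module: shared across all partners, learned once.
W_task = rng.normal(size=(TASK_DIM, OBS_DIM))
# Shared output map that combines task features with a convention vector.
W_out = rng.normal(size=(ACT_DIM, TASK_DIM + CONV_DIM))

def policy(obs, conv):
    """Pick an action from shared task features plus a partner's convention."""
    task_features = np.tanh(W_task @ obs)                 # rule-dependent part
    logits = W_out @ np.concatenate([task_features, conv])
    return int(np.argmax(logits))

# Adapting to a new partner only means fitting a CONV_DIM-sized vector,
# rather than relearning the full task representation.
conv_a = rng.normal(size=CONV_DIM)
conv_b = rng.normal(size=CONV_DIM)

obs = rng.normal(size=OBS_DIM)
print(policy(obs, conv_a), policy(obs, conv_b))
```

The design point the sketch illustrates is that the partner-specific state is deliberately tiny (here two numbers), so switching partners requires far less data than retraining the shared module.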
11:15 AM - 11:30 AM RSS-wide Virtual Socializing Session

Sli.do for Live Panel Q&A

Organizers