Emergent Behaviors in Human-Robot Systems

RSS 2020 Workshop
July 12, 2020
Oregon State University, Corvallis, Oregon, USA (held as a virtual event)

Contact Information
  ebiyik [at] stanford [dot] edu


This workshop will focus on emergent behaviors in human-robot systems. The intended audience therefore includes, but is not limited to, researchers who study human-robot interaction and multi-agent systems, with applications such as personal robots, assistive robots, and self-driving cars. The workshop will bring attention to several aspects of emergent behaviors: how they can be predicted, how they emerge, and how we can best benefit from them. Bringing together researchers from various fields, the workshop will consist of interleaved talks by speakers from different disciplines to encourage multidisciplinary discussion and interaction. Discussion and interaction will be further encouraged through break-out sessions, including a panel and a debate, as well as the morning and afternoon coffee breaks.

This workshop may also be of interest to the learning community, since many recent developments in multi-agent reinforcement learning focus on how communication and coordination can emerge autonomously in robot teams. In addition, exploring how human agents develop conventions with robots has often served as an inspiration for learning algorithms. Hence, the workshop includes speakers focused on multi-agent learning.



Robots are increasingly becoming members of our everyday community. Self-driving cars, robot teams, and social and assistive robots operate alongside human end-users to carry out various tasks. Similar to conventions between humans, which are low-dimensional representations that capture the interaction and can change over time, emergent behaviors form as a result of repeated long-term interactions in multi-agent systems or human-robot teams. Unfortunately, these emergent behaviors are still not well understood. For instance, the robotics community has observed that many different, and often surprising, robot behaviors can emerge when robots are equipped with artificial intelligence and machine learning techniques. While some of these emergent behaviors are simply undesirable side effects of misspecified objectives, many of them contribute significantly to task performance and influence other agents in the environment. These behaviors can further lead other agents, who are possibly humans, to develop conventions and adapt, encouraging them to approach the task differently.

Goal. We want to investigate how complex and/or unexpected robot behaviors emerge in human-robot systems, and to understand how we can minimize their risks and maximize their benefits. This workshop promotes a discussion on how such behaviors can be predicted, how they emerge, and how we can best benefit from them.

Speakers (PDF Version)

Brenna Argall

Northwestern University

Anca Dragan

University of California, Berkeley

Judith Fan

University of California, San Diego

Jakob Foerster

Facebook AI & University of Toronto

Robert D. Hawkins

Princeton University

Maja Matarić

University of Southern California

Negar Mehr

Stanford University & University of Illinois Urbana-Champaign

Igor Mordatch

Google Brain

Harold Soh

National University of Singapore


30-minute talks by the invited speakers are available on YouTube and linked below.
All times below are in Pacific Time (PT).

09:15 AM - 09:30 AM RSS-wide Virtual Socializing Session
09:30 AM - 10:30 AM Panel (Speakers: Brenna Argall, Anca Dragan, Judith Fan, Jakob Foerster, Robert D. Hawkins, Maja Matarić, Igor Mordatch)
10:30 AM - 11:00 AM Spotlight Talks
  • Mitigating Undesirable Emergent Behavior Arising Between Driver and Semi-automated Vehicle [abstract]
    Timo Melman*, Niek Beckers*, David Abbink
  • On the Critical Role of Conventions in Adaptive Human-AI Collaboration [abstract]
    Andy Shih, Arjun Sawhney, Dorsa Sadigh
Humans can quickly adapt to new partners in collaborative tasks (e.g. playing basketball), because they understand which fundamental skills of the task (e.g. how to dribble, how to shoot) carry over across new partners. Thus, they can effectively develop a convention with a new partner (e.g. pointing down signals bounce pass, pointing up signals lob pass), without being distracted by the full complexity of the task. To collaborate seamlessly with humans, AI agents should develop and adapt to conventions for different human partners. While many previous works have acknowledged the importance of learning conventions for human-AI collaboration, current approaches do not distinguish between skills intrinsic to the task and convention information specific to a partner. In this work, we formally define conventions as shared representations between partners that can evolve through repeated interactions. We propose a framework that teases apart rule-dependent representation from a low-dimensional convention-dependent representation in a principled way. Furthermore, we characterize the importance of conventions in collaborative tasks, and learn conventions to quickly adapt to new partners without re-learning the full complexities of the task. Finally, we study how humans adapt and respond to different conventions and partners. Our human-subject studies suggest that humans adapt faster when AI agents use explainable conventions, and that they are unable to adapt to convoluted or unexplainable conventions.
  • Emergent Correlated Equilibrium through Synchronized Exploration [abstract]
    Mark Beliaev*, Woodrow Z. Wang*, Daniel A. Lazar, Erdem Bıyık, Dorsa Sadigh, Ramtin Pedarsani
  • Towards Emerging Nonverbal Communication Protocols for Multi-Robot Populations [abstract]
    Kalesha Bullard, Jakob Foerster, Douwe Kiela, Joelle Pineau, Franziska Meier
  • Coordinating on Shared Procedural Abstractions for Physical Assembly [abstract]
    Will McCarthy, Cameron Holdaway, Robert Hawkins, Judith Fan

    Many real-world tasks require collaboration between multiple autonomous agents. The current study investigates mechanisms that enable human teams to work together more effectively in a physical reasoning task that requires coordinated decision making across an extended interaction.

    We paired human participants in an online environment: one participant was assigned the role of Architect, and the other the role of Builder. On each trial, the Architect viewed a scene containing two block towers, consisting of four blocks each, but was unable to place blocks themselves; conversely, the Builder was given a fixed inventory of blocks, but was unable to see the target scene. The goal of the Architect was to issue instructions in natural language to the Builder, whose goal it was to reconstruct the target scene. Each scene was presented four times, interleaved among other scenes, allowing us to examine behavioral signatures of the emergence of shared procedural abstractions across repeated collaboration.

    Specifically, we hypothesized that the instructions provided by the Architect would become more abstract over time, reflecting the accumulation of interaction history. That is, initial instructions would provide extensive details about simple actions (e.g., “Place a red domino three spaces to the left of the wall.”), while later instructions would shift to more concise language encoding complex sequences of actions (e.g., “Build a C on the right.”). Among N = 14 pairs of participants that met our minimum performance threshold, we found that Architects provided more concise instructions across repetitions of the same tower pairs (first: 59.7 words; final: 23.0 words; b = -11.8, t = -10.5, p < 0.001), concurrent with improvement in reconstruction accuracy (first: 89.6% overlap; final: 98.9%; b = 3.17, t = 3.99, p < 0.001). These results suggest that participants were able to rapidly coordinate on linguistic conventions for parsing each assembly task into reusable subgoals.

    In future work, we plan to further analyze the content and structure of these linguistic conventions (e.g., emergence of unique tokens for towers and scenes), and develop autonomous artificial agents who can emulate human behavior in the Architect and Builder roles. In the long term, such studies may shed light on how goal-relevant abstractions emerge from interaction between intelligent, autonomous agents.

  • Robot Learning Collaborative Manipulation Plans from YouTube Cooking Videos [abstract]
    Hejia Zhang, Stefanos Nikolaidis
11:15 AM - 11:30 AM RSS-wide Virtual Socializing Session
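The spotlight talk "On the Critical Role of Conventions in Adaptive Human-AI Collaboration" above describes separating rule-dependent task knowledge from a low-dimensional, partner-specific convention. A minimal sketch of that separation (not the authors' implementation; all class and variable names here are illustrative assumptions) might look like:

```python
# Illustrative sketch: a policy split into a shared, rule-dependent task
# module and a small, per-partner convention module. Only the convention
# module needs to be re-learned for each new partner.

class ModularPolicy:
    """Policy = shared task rules + per-partner convention mapping."""

    def __init__(self, task_rules):
        # Rule-dependent knowledge shared across all partners,
        # e.g. which actions are legal in each state.
        self.task_rules = task_rules
        # Low-dimensional convention learned separately per partner,
        # e.g. a mapping from partner signals to intended actions.
        self.conventions = {}

    def adapt_to(self, partner_id, observed_signals):
        # Adapting to a new partner updates only the small convention
        # mapping; the task rules carry over unchanged.
        self.conventions[partner_id] = dict(observed_signals)

    def act(self, partner_id, state, signal):
        legal = self.task_rules[state]
        intended = self.conventions.get(partner_id, {}).get(signal)
        # Fall back to a default legal action if no convention is known.
        return intended if intended in legal else legal[0]


# Usage: the same task rules serve two partners with opposite conventions
# (echoing the abstract's bounce-pass / lob-pass example).
rules = {"pass": ["bounce", "lob"]}
policy = ModularPolicy(rules)
policy.adapt_to("alice", {"point_down": "bounce", "point_up": "lob"})
policy.adapt_to("bob", {"point_down": "lob", "point_up": "bounce"})
print(policy.act("alice", "pass", "point_down"))  # bounce
print(policy.act("bob", "pass", "point_down"))    # lob
```

The design point is that the expensive part (task rules) is learned once and shared, while each partner contributes only a small convention table, which is what makes fast adaptation to new partners plausible.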


Recordings are available on the YouTube playlist: https://www.youtube.com/playlist?list=PLALgrVO1YLmsXdaRKWtABqONtA81J0t2O.