Intuitive human-robot collaboration requires adaptive modalities for humans and robots to communicate and learn from each other. For diverse teams of humans and robots to naturally collaborate on novel tasks, robots must be able to model roles for themselves and other team members, anticipate how team members may perceive their actions, and communicate back to team members to continuously promote inclusive team cohesion toward achieving a shared goal.
Here, we describe a set of tasks for studying mixed multi-human and multi-robot teams with heterogeneous roles pursuing joint goals through both voice and gestural interactions. Based on the cooperative game TEAM3, we specify a series of dyadic and triadic human-robot collaboration tasks that require both verbal and nonverbal communication to accomplish effectively. Task materials are inexpensive and provide methods for studying a diverse set of challenges in human-robot communication, learning, and perspective-taking.
Dr. Joseph Salisbury is a neuroscientist (Ph.D., Brandeis University, 2013) and software developer whose current research focuses on human-computer interaction, human-robot interaction, and applications of large language models.
The above listed authors are current or former employees of Riverside Research. Authors affiliated with other institutions are listed on the full paper.