Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration

28 Sept 2020, 15:51 (modified: 10 Feb 2022, 11:48) · ICLR 2021 Spotlight · Readers: Everyone
Keywords: social perception, human-AI collaboration, theory of mind, multi-agent platform, virtual environment
Abstract: In this paper, we introduce Watch-And-Help (WAH), a challenge for testing social intelligence in agents. In WAH, an AI agent needs to help a human-like agent perform a complex household task efficiently. To succeed, the AI agent needs to i) understand the underlying goal of the task by watching a single demonstration of the human-like agent performing the same task (social perception), and ii) coordinate with the human-like agent to solve the task in an unseen environment as fast as possible (human-AI collaboration). For this challenge, we build VirtualHome-Social, a multi-agent household environment, and provide a benchmark including both planning- and learning-based baselines. We evaluate the performance of AI agents with the human-like agent as well as with real humans, using objective metrics and subjective user ratings. Experimental results demonstrate that our challenge and virtual environment enable a systematic evaluation of the important aspects of machine social intelligence at scale.
One-sentence Summary: We introduce Watch-And-Help (WAH), a challenge for testing social intelligence in agents.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [![github](/images/github_icon.svg) xavierpuigf/watch_and_help](https://github.com/xavierpuigf/watch_and_help)
Data: [HoME](https://paperswithcode.com/dataset/home)