Learning Human Perception Dynamics for Informative Robot Communication

Published: 07 May 2025, Last Modified: 07 May 2025, ICRA Workshop on Human-Centered Robot Learning, CC BY 4.0
Workshop Statement: Our work develops a planning approach for human-robot cooperative navigation that balances autonomous action with informative communication, using a learned human perception model. This directly supports the workshop’s focus on data-driven, human-centered embodied AI by integrating human cognitive responses into robot decision-making for unified communication-action planning.
Keywords: Human-Robot Interaction, Online Planning, Human Perception Modeling
TL;DR: Learning a human perception dynamics model to facilitate human-robot cooperative navigation.
Abstract: Human-robot cooperative navigation is challenging in environments with incomplete information. We introduce CoNav-Maze, a simulated environment where a robot navigates using local perception while a human operator provides guidance based on an inaccurate map. The robot can share its camera views to improve the operator’s understanding of the environment. To enable efficient human-robot cooperation, we propose Information Gain Monte Carlo Tree Search (IG-MCTS), an online planning algorithm that balances autonomous movements and informative communication. Central to IG-MCTS is a fully convolutional neural human perception dynamics model that estimates how humans distill information from robot communications. We collect a dataset through a crowdsourced mapping task in CoNav-Maze to train this model before the cooperative navigation experiments. User studies show that IG-MCTS outperforms teleoperation and instruction-following baselines, achieving comparable task performance with significantly less communication and lower human cognitive load, as evidenced by eye-tracking metrics.
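Illustrative sketch (not part of the submission): the abstract describes a fully convolutional human perception dynamics model that predicts how the operator's understanding of the maze changes after the robot shares a camera view. The code below is a minimal, hypothetical rendering of that idea; the module name, channel sizes, belief encoding, and entropy-based information-gain scoring are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a fully convolutional model that maps
# (current human belief map, communicated robot observation) -> updated belief map.
import torch
import torch.nn as nn

class PerceptionDynamicsFCN(nn.Module):
    """Assumed encoding: each input is an HxW grid; channel counts are illustrative."""
    def __init__(self, belief_channels: int = 3, obs_channels: int = 3, hidden: int = 32):
        super().__init__()
        in_ch = belief_channels + obs_channels
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            # Per-cell logits over belief classes (e.g., free / wall / unknown).
            nn.Conv2d(hidden, belief_channels, kernel_size=1),
        )

    def forward(self, belief: torch.Tensor, obs: torch.Tensor) -> torch.Tensor:
        # belief, obs: (B, C, H, W); fully convolutional, so any maze size works.
        x = torch.cat([belief, obs], dim=1)
        return self.net(x).softmax(dim=1)  # predicted post-communication belief map


# Usage: predict the belief update for a 16x16 maze. An IG-MCTS-style planner could
# then score a candidate communication by the reduction in per-cell belief entropy.
model = PerceptionDynamicsFCN()
belief = torch.full((1, 3, 16, 16), 1.0 / 3)   # maximally uncertain prior belief
obs = torch.zeros(1, 3, 16, 16)                 # encoded camera view shared by the robot
updated = model(belief, obs)
entropy = -(updated * updated.clamp_min(1e-8).log()).sum(dim=1)  # per-cell uncertainty
```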
Supplementary Material: zip
Submission Number: 28