Explainable Guidance and Justification for Mental Model Alignment in Human-Robot Teams

Published: 01 Jan 2024 · Last Modified: 17 Jun 2025 · HRI (Companion) 2024 · CC BY-SA 4.0
Abstract: Humans and autonomous robots have the potential to perform tasks collaboratively as teammates, achieving greater performance than either could alone. Productive teamwork, however, requires a great deal of coordination, with human and robot agents maintaining well-aligned mental models of the shared task and each agent's role within it. Achieving this requires live and effective communication, especially as plans change due to shifts in environment knowledge. Our work leverages augmented reality and natural language interfaces to recommend policies to human teammates, explain the rationale behind those policies, and justify robot behavior when expectations become mismatched, facilitating plan synchronization in partially observable, collaborative human-robot domains.