Augmented Reality-Based Explainable AI Strategies for Establishing Appropriate Reliance and Trust in Human-Robot Teaming

18 Feb 2022, 22:24 (modified: 01 Jul 2022, 18:53) · VAM-HRI 2022
Keywords: Human-Robot Collaboration, Explainable AI, Augmented Reality, Reinforcement Learning, Counterfactual Explanation, Shared Mental Models, Plan Justification
Abstract: In human-robot teaming, live and effective communication is critical for maintaining coordination and improving task fluency, especially in uncertain environments. Poor communication between teammates can foster doubt and misunderstanding and lead to task failures. In previous work, we explored visually communicating notions of environmental uncertainty alongside robot-generated suggestions through augmented reality (AR) interfaces in a human-robot teaming setting. We introduced two complementary modalities of visual guidance — prescriptive guidance (visualizing recommended actions) and descriptive guidance (visualizing state space information to aid in decision-making) — along with an algorithm to generate and utilize these modalities in partially observable multi-agent collaborative tasks. We compared these modalities in a human subjects study, which showed that this combined guidance improved trust, interpretability, performance, and human teammate independence. In this work, we synthesize key takeaways from that study, use them to describe remaining open challenges for live communication in human-robot teaming under uncertainty, and propose approaches to address those challenges via explainable AI techniques such as visual counterfactual explanations, predictable and explicable planning, and robot-generated justifications.