Bidirectional Model Reconciliation: Explanations in Human-Robot Teams

Anonymous

16 Apr 2019 (modified: 05 May 2023) · Submitted to XAIP 2019
Keywords: Human-Aware AI, Human-Robot Teaming, Explanations
Abstract: As AI technology becomes ubiquitous in our day-to-day lives, it is increasingly important for AI agents to be either explicable or to provide explanations when they are not. A body of work in the automated planning community explores a human-robot interaction setting in which the robot makes a plan, the human observes it, and the robot provides explanations when its plan (or behavior) does not align with the human's expectations. In this paper, we model a co-habitation scenario and give a general overview of settings in which the human may be either a supervisor or a teammate. First, we highlight the various models that come into play in such settings. Second, we pinpoint the explanation scenarios that arise from disparities between the human's and the robot's understanding of the team models. In settings where the robot is assumed to know more about the model, explanation is a one-way communication. Conversely, when the human's model is more accurate than the robot's, we show that a two-way interaction becomes necessary for explanation. Lastly, we discuss how some existing work for the supervisor case can be adapted to certain teammate settings, and we sketch a few high-level ideas for scenarios where no solutions yet exist.
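
To make the one-way (robot-to-human) case concrete, the sketch below illustrates the general idea of explanation as model reconciliation: the two mental models are encoded as sets of features, and an explanation is a smallest set of model updates that makes the robot's plan acceptable in the human's updated model. This is a minimal illustration under assumed encodings, not the paper's implementation; the names `model_difference`, `minimal_explanation`, and `plan_is_valid`, as well as the set-based model representation, are all hypothetical.

```python
# Minimal sketch of one-way model reconciliation (illustrative only):
# models are sets of "model features" the agent believes (e.g., facts,
# preconditions). An explanation is the smallest set of differences
# between the robot's and the human's model that, once communicated,
# makes the robot's plan look valid to the human.

from itertools import combinations

def model_difference(robot_model: set, human_model: set) -> set:
    """Features the two models disagree on, in either direction."""
    return (robot_model - human_model) | (human_model - robot_model)

def minimal_explanation(robot_model, human_model, plan_is_valid):
    """Find the smallest set of model updates that makes the robot's
    plan acceptable in the human's updated model.

    plan_is_valid: callable taking a model (set) and returning bool;
    a stand-in for an actual planner/validator.
    """
    diffs = sorted(model_difference(robot_model, human_model))
    for k in range(len(diffs) + 1):          # smallest subsets first
        for subset in combinations(diffs, k):
            # Apply the candidate updates: move each differing feature
            # in the human's model toward the robot's model.
            updated = set(human_model)
            for f in subset:
                if f in robot_model:
                    updated.add(f)
                else:
                    updated.discard(f)
            if plan_is_valid(updated):
                return set(subset)           # minimal explanation found
    return None                              # plan cannot be justified

# Toy usage: the human is missing one fact the robot's plan relies on.
robot = {"door_locked", "has_key"}
human = {"door_locked"}
explanation = minimal_explanation(
    robot, human,
    plan_is_valid=lambda m: "has_key" in m)  # stand-in validity check
print(explanation)                           # {'has_key'}
```

In the two-way case the abstract describes, where the human's model may be the more accurate one, a single search like this no longer suffices: the robot must also solicit and incorporate model updates from the human rather than only emitting them.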