Supporting Explanations Within an Instruction Giving Framework

Jun 08, 2021 (edited Sep 08, 2021) — XAIP 2021
  • Keywords: Explainable Planning, Instruction Giving Agent, Plan-Based agent, Human-Agent Interaction, Within Task Elicitation
  • TL;DR: Starting from a corpus of task-based interaction dialogues, we examine the use of explanations and demonstrate how they are supported in an instruction-giving framework.
  • Abstract: As AI Planning has matured and become more applicable to real-world scenarios, there has been an increased focus on explainable planning (XAIP), which aims to make the planning model, the planning process, and the resulting plan more explainable. In the context of a plan-based instruction-giving agent, explainable planning is a vital ingredient in supporting effective interaction, as explanations relating to the plan or the model form natural parts of an interaction. As a starting point, we have analysed a corpus of task-based human-human interactions. This analysis identifies transactions (roughly plan steps) as key components of the interaction, where parts of the interaction largely focus on the specific step (e.g., instruction) under consideration. We have developed a new framework that exploits this structure by organising the interactions into a series of loosely coupled transactions. In this framework, explanations play an important part both at the transaction level (e.g., instruction clarifications) and at the task level (e.g., intention). We have developed a prototype system that can support large-scale interactions. Our results also indicate that our system can elicit information from the user at execution time and use this information to select an appropriate plan. We show that this can lead to fewer explanations.