VizXP: A Visualization Framework for Conveying Explanations to Users in Model Reconciliation Problems

Published: 19 Jul 2021 · Last Modified: 05 May 2023 · XAIP 2021
Keywords: Explainable Planning, Visualization, Model Reconciliation, User Study
TL;DR: We propose a framework for designing visualizations for explainable planning, and run a user study with a prototype created using that framework to compare it against text-based explanations.
Abstract: Advancements in explanation generation for automated planning algorithms have moved us a step closer towards realizing the full potential of human-AI collaboration in real-world planning applications. Within this context, a framework called model reconciliation has gained considerable traction, largely due to its deep connection with a popular theory in human psychology known as the theory of mind. Existing literature in this setting, however, has mostly been constrained to algorithmic contributions for generating explanations. To the best of our knowledge, there has been very little work on how to effectively convey such explanations to human users, a critical component of human-AI collaboration systems. In this paper, we set out to explore to what extent visualizations are an effective medium for conveying explanations in a way that can be easily understood. In particular, drawing inspiration from work on visualization systems for classical planning, we propose a framework for visualizing explanations generated by model reconciliation algorithms. We demonstrate the efficacy of our proposed system in a comprehensive user study, where we compare our framework against a text-based baseline for two types of explanations: domain-based and problem-based explanations. Results from the user study show that users, on average, understood explanations better when they were conveyed via our visualization system than when they were conveyed via the text-based baseline.