Towards the next generation explainable AI that promotes AI-human mutual understanding

Published: 27 Oct 2023, Last Modified: 23 Nov 2023, NeurIPS XAIA 2023
TL;DR: We propose to equip XAI with theory of mind (ToM) capabilities so that it may present more informative explanations.
Abstract: Recent advances in deep learning have created demand for better explanations of AI's operations to enhance the transparency of AI decisions, especially in critical systems such as self-driving cars or medical diagnosis applications, to ensure safety, user trust, and user satisfaction. However, current Explainable AI (XAI) solutions focus on using more AI to explain AI, without considering users' mental processes. Here we use cognitive science theories and methodologies to develop a next-generation XAI framework that promotes human-AI mutual understanding, using computer vision AI models as examples due to their importance in critical systems. Specifically, we propose to equip XAI with an important cognitive capacity in human social interaction: theory of mind (ToM), i.e., the capacity to understand others' behaviour by attributing mental states to them. We focus on two ToM abilities: (1) inferring human strategy and performance (i.e., the machine's ToM), and (2) inferring human understanding of AI strategy and trust towards AI (i.e., inferring the human's ToM). Computational modeling of human cognition and experimental psychology methods play an important role in enabling XAI to develop these two ToM abilities, so that it can provide user-centered explanations by comparing users' strategies with the AI's strategy and estimating the user's current understanding of the AI's strategy, much as real-life teachers do. Enhanced human-AI mutual understanding can in turn lead to better adoption of and trust in AI systems. This framework thus highlights the importance of cognitive science approaches to XAI.
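As an illustration of the first ToM ability (not an implementation from the paper), here is a minimal sketch of comparing a user's strategy with the AI's strategy on a vision task. It assumes the human strategy is available as a spatial attention map (e.g., an eye-tracking fixation heatmap) and the AI strategy as a GradCAM- or RISE-style saliency map; `normalize`, `strategy_divergence`, and `most_informative_region` are hypothetical helper names introduced here for illustration.

```python
import numpy as np

def normalize(m):
    """Normalize a non-negative 2D map so it sums to 1 (a spatial distribution)."""
    m = np.clip(m, 0.0, None)
    s = m.sum()
    return m / s if s > 0 else np.full(m.shape, 1.0 / m.size)

def strategy_divergence(human_attention, model_saliency, eps=1e-12):
    """KL divergence between the human's and the model's spatial strategies.

    human_attention: e.g., an eye-tracking fixation heatmap (H x W) -- assumed input
    model_saliency:  e.g., a GradCAM or RISE saliency map   (H x W) -- assumed input
    """
    p = normalize(human_attention) + eps
    q = normalize(model_saliency) + eps
    p, q = p / p.sum(), q / q.sum()  # renormalize after smoothing
    return float(np.sum(p * np.log(p / q)))

def most_informative_region(human_attention, model_saliency, grid=4):
    """Pick the image region where the two strategies disagree most; explaining
    this region should update the user's understanding of the AI the most."""
    p = normalize(human_attention)
    q = normalize(model_saliency)
    h, w = p.shape
    gh, gw = h // grid, w // grid
    best, best_score = None, -np.inf
    for i in range(grid):
        for j in range(grid):
            cell_p = p[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].sum()
            cell_q = q[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw].sum()
            score = abs(cell_p - cell_q)  # attention mass the strategies disagree on
            if score > best_score:
                best, best_score = (i, j), score
    return best, best_score
```

In this sketch, a large divergence flags a mismatch between how the user and the model solve the task, and the highest-disagreement grid cell is a candidate focus for a targeted explanation.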
Submission Track: Full Paper Track
Application Domain: None of the above / Not applicable
Clarify Domain: Cognitive Science
Survey Question 1: We propose that an effective XAI system should possess the theory-of-mind (ToM) ability to evaluate how humans solve the same problem as the AI system (i.e., ToM about humans' cognitive models of the task), as well as the ToM ability to infer users' understanding of the AI's operations (i.e., to infer the human's ToM about the AI's cognitive model). Using these ToM abilities, the XAI system can present the most informative explanations to update the human's current understanding of the AI, as it relates to how the human solves the task. In doing so, the XAI system can establish mutual understanding between the AI and the user.
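To make "most informative explanation" concrete, a minimal sketch of one standard way to operationalize it: maintain a distribution over hypotheses about the user's current model of the AI's strategy, and select the candidate explanation with the highest expected information gain (expected KL between posterior and prior). The variable names and the likelihood inputs below are illustrative assumptions, not the paper's specified method.

```python
import numpy as np

def expected_information_gain(prior, likelihoods, eps=1e-12):
    """Expected KL(posterior || prior) over possible user responses.

    prior:       (H,) distribution over hypotheses about the user's current
                 model of the AI's strategy
    likelihoods: (R, H) P(user response r | hypothesis h), for one candidate
                 explanation -- assumed to come from a user-behaviour model
    """
    eig = 0.0
    for lik in likelihoods:                  # each possible user response
        evidence = float(lik @ prior)        # P(response)
        if evidence <= 0:
            continue
        posterior = lik * prior / evidence   # Bayes' rule
        kl = np.sum(posterior * np.log((posterior + eps) / (prior + eps)))
        eig += evidence * kl                 # weight by response probability
    return eig

def choose_explanation(prior, candidate_likelihoods):
    """Pick the candidate explanation expected to update the user's beliefs most."""
    gains = [expected_information_gain(prior, lik) for lik in candidate_likelihoods]
    return int(np.argmax(gains)), gains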
Survey Question 2: As we are proposing a general framework, we cannot answer this question about specific limitations of a particular application.
Survey Question 3: GradCAM, RISE
Submission Number: 9