Provenance as a Substrate for Human Sensemaking and Explanation of Machine Collaborators

Published: 01 Jan 2021 · Last Modified: 15 Jun 2024 · SMC 2021 · CC BY-SA 4.0
Abstract: Building and evaluating explainable Artificial Intelligence (AI) systems that accommodate human cognition remains a challenge for Human-Computer Interaction (HCI), and the need for practical solutions grows with our reliance on machines to extract, classify, and process information. Recent work has proposed triggers and metrics for explainable AI based on human mental models and psychological explanation quality. We complement this previous work by (1) extending and supporting these triggers and metrics with existing directives for information integrity, transparency, and rigor, (2) outlining a provenance-based framework for recording human-machine collaboration, and (3) demonstrating that a provenance-based approach addresses many of these explainable AI triggers and metrics. We show that provenance-based analyses help address questions of foundations, alternatives, necessity vs. sufficiency, sensitivity (e.g., what-if analyses), impact, and rationale, and we provide concrete evidence using an implemented human-machine analytic workspace. We outline ways to empirically measure the ability of these additional interpretation strategies to improve human understanding.
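To make the abstract's central idea concrete, the sketch below shows one way a provenance record of human-machine collaboration might be kept and queried. It is a minimal illustration in the spirit of the W3C PROV model (entities, activities, agents linked by used / wasGeneratedBy / wasAssociatedWith relations), not the authors' implementation; all names such as `ProvGraph`, `agent:model-v2`, and `data:finding` are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ProvGraph:
    """Minimal PROV-style record of an analytic session (illustrative only)."""
    entities: set[str] = field(default_factory=set)
    agents: set[str] = field(default_factory=set)
    used: dict[str, list[str]] = field(default_factory=dict)   # activity -> input entities
    generated: dict[str, str] = field(default_factory=dict)    # output entity -> activity
    associated: dict[str, str] = field(default_factory=dict)   # activity -> agent

    def record(self, activity: str, agent: str, inputs: list[str], output: str) -> None:
        """Log one human or machine step: agent ran activity on inputs, producing output."""
        self.entities.update(inputs)
        self.entities.add(output)
        self.agents.add(agent)
        self.used[activity] = list(inputs)
        self.generated[output] = activity
        self.associated[activity] = agent

    def foundations(self, entity: str) -> set[str]:
        """A 'foundations' query: every upstream entity a result ultimately rests on."""
        frontier, seen = [entity], set()
        while frontier:
            activity = self.generated.get(frontier.pop())
            for src in self.used.get(activity, []):
                if src not in seen:
                    seen.add(src)
                    frontier.append(src)
        return seen

# Hypothetical session: a machine classifier's output feeds a human analyst's judgment.
g = ProvGraph()
g.record("classify", "agent:model-v2", ["doc:report"], "data:labels")
g.record("assess", "agent:analyst", ["data:labels", "doc:notes"], "data:finding")
print(g.foundations("data:finding"))  # {'data:labels', 'doc:notes', 'doc:report'}
```

Once such a graph exists, the other explanation questions the abstract lists become graph queries: sensitivity (what-if) by replaying downstream activities with a substituted input, impact by traversing the relations forward instead of backward, and rationale by attaching the associated agent to each step on a result's derivation path.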