Eye-tracking of clinician behaviour with explainable AI decision support: a high-fidelity simulation study

Published: 20 Jun 2023, Last Modified: 19 Jul 2023, IMLH 2023 Oral
Keywords: Human-AI interaction, explainable AI (XAI), clinical decision support system (CDSS), real-world simulation, ICML
TL;DR: Explainable AI does not help doctors to reject unsafe AI suggestions in a high-fidelity simulation with eye-tracking
Abstract: Explainable AI (XAI) is seen as important for AI-driven clinical decision support tools, but most XAI has been evaluated on non-expert populations, for proxy tasks, and in low-fidelity settings. The rise of generative AI and the potential safety risk of hallucinatory AI suggestions causing patient harm have once again highlighted the question of whether XAI can act as a safety mitigation mechanism. We studied intensive care doctors in a high-fidelity simulation suite, wearing eye-tracking glasses, on a prescription dosing task to better understand their interaction dynamics with XAI for both intentionally safe and unsafe (i.e. hallucinatory) AI suggestions. We show that eye-tracking is feasible in this setting and that the attention devoted to any of four types of XAI does not differ between safe and unsafe AI suggestions. This calls into question the utility of XAI as a mitigation against patient harm from clinicians erroneously following poor-quality AI advice.
Submission Number: 33