Interaction of doctors with explainable RL decision support via behavioural readouts of eye-tracking

Published: 20 Jul 2023, Last Modified: 31 Aug 2023, EWRL16
Keywords: Human-AI interaction, explainable RL (XRL), explainable AI (XAI), clinical decision support system (CDSS), real-world simulation
TL;DR: Reinforcement learning with explanations did not help doctors to reject unsafe clinical AI suggestions as per eye-tracking readouts
Abstract: Explainable reinforcement learning (XRL) is crucial for reinforcement learning (RL) algorithms within clinical decision support systems. However, most XRL evaluations have been conducted with non-expert users in toy settings. Despite the promise of RL in healthcare, deployment has been slow, in part because of safety concerns that XRL might be able to attenuate. In our study, we observed doctors interacting with a clinical XRL system in a high-fidelity simulated medication dosing scenario. Using eye-tracking technology, we analyzed these interactions across safe and unsafe XRL suggestions. We find that the cognitive attention devoted to XRL explanations during unsafe scenarios is similar to that during safe scenarios, even though doctors more frequently rejected unsafe XRL suggestions. This suggests that XRL explanations do not lie in the causal pathway by which doctors reject unsafe AI advice.