Evaluating the impact of explainable RL on physician decision-making in high-fidelity simulations: insights from eye-tracking metrics
Keywords: Reinforcement learning, interpretability, real-world, eye-tracking
TL;DR: We study the interaction between clinicians and XRL in a physical simulation suite, drawing conclusions from eye-tracking metrics.
Abstract: Explainable reinforcement learning (XRL) is crucial for reinforcement learning (RL) algorithms within clinical decision support systems. However, most XRL evaluations have been conducted with non-expert users in toy settings. Despite the promise of RL in healthcare, deployment has been especially slow, in part because of safety concerns that XRL might be able to attenuate. In our study, we observed doctors interacting with a clinical XRL system in a high-fidelity simulated medication dosing scenario. Using eye-tracking technology, we analyzed these interactions across safe and unsafe XRL suggestions. We find that the cognitive attention devoted to XRL during unsafe scenarios is similar to that during safe scenarios (despite doctors more frequently rejecting unsafe XRL suggestions). This suggests that XRL does not lie on the causal pathway by which doctors reject unsafe AI advice.
Submission Number: 6