In humble defense of unexplainable black box prediction models in healthcare

Florien S. van Royen, Hilde J.P. Weerts, Anne A.H. de Hond, Geert Jan Geersing, Frans H. Rutten, Karel G.M. Moons, Maarten van Smeden

Published: 01 Jan 2026, Last Modified: 15 Dec 2025 · Journal of Clinical Epidemiology · CC BY-SA 4.0
Abstract: The increasing complexity of prediction models for healthcare purposes, whether developed with or without artificial intelligence (AI) techniques, drives the urge to open up complex “black box” models using eXplainable AI (XAI) techniques. In this paper, we argue that XAI may not necessarily provide insights relevant to decision-making in the medical setting and can lead to misplaced trust and misinterpretation of the model's usability. An important limitation of XAI is the difficulty of avoiding causal interpretation, which may result in confirmation bias or false dismissal of the model when explanations conflict with clinical knowledge. Rather than expecting XAI to generate trust in black box prediction models among patients and healthcare providers, trust should be grounded in rigorous prediction model validations and model impact studies assessing the model's effect on shared medical decision-making. In this paper, we therefore humbly defend “unexplainable” prediction models in healthcare.