Abstract: Medical artificial intelligence (AI) systems hold promise for transforming healthcare by supporting clinical decision-making in diagnostics and treatment. The effective deployment of medical AI requires trust among key stakeholders, including patients, providers, developers and regulators, which can be built by ensuring transparency in medical AI: in its design, operation and outcomes. However, many AI systems function as 'black boxes', making it challenging for users to interpret and verify their inner workings. In this Review, we examine the current state of transparency in medical AI, from training data to model development and deployment, identifying key challenges, risks and opportunities. We then explore a range of techniques that promote explainability, highlighting the importance of continual monitoring and system updates to ensure that AI systems remain reliable over time. Finally, we address the need to overcome barriers that inhibit the integration of transparency tools into clinical settings and review regulatory frameworks that prioritize transparency in emerging AI systems.

Artificial intelligence (AI) models are increasingly being applied across a range of biomedical domains to support clinical decision-making and therapeutic strategies. This Review examines the transparency of medical AI systems, highlighting key approaches to increasing transparency in model design, operation and outcomes.
doi: 10.1038/s44222-025-00363-w