Position: Ophthalmology as a Lens for Trustworthy GenAI in Europe---Uncertainty-Aware AI under the EU AI Act
Keywords: Uncertainty quantification, Generative AI, Ophthalmology, EU AI Act, Medical Device Regulation, Human oversight, Trustworthy AI, Calibration, Out-of-distribution detection, Health Technology Assessment
TL;DR: We argue that uncertainty-aware GenAI in ophthalmology can help transform Europe’s strict AI Act and MDR requirements from barriers into enablers of trustworthy medical AI.
Abstract: Artificial intelligence is rapidly advancing in ophthalmology, offering data-driven models for early detection, triage, and decision support. In Europe, adoption is shaped by an unusually strict regulatory landscape defined by the parallel application of the Medical Device Regulation (MDR) and the Artificial Intelligence Act (AI Act). The MDR emphasizes safety and performance, while the AI Act imposes horizontal obligations for high-risk AI systems, including transparency, robustness, and human oversight. Although the AI Act was adopted in 2024, its provisions will phase in through 2026–27, creating a moving target for compliance. This dual environment risks raising entry barriers and slowing innovation, but it also opens opportunities: uncertainty-aware methods, such as calibrated confidence estimates, out-of-distribution detection, and risk communication, can directly address transparency and reliability requirements while aligning with clinical trust needs. Using ophthalmology as a lens, we argue that uncertainty quantification can turn Europe’s regulatory strictness from a bottleneck into an enabler of trustworthy GenAI in healthcare.
Submission Number: 152