Uncertainty-aware Cycle Diffusion Model for Fair Glaucoma Diagnosis

28 Nov 2025 (modified: 15 Dec 2025) · MIDL 2026 Conference Submission · CC BY 4.0
Keywords: Fairness Learning, Image Synthesis, Diffusion Models, ControlNet
TL;DR: Shape-controlled diffusion augments underrepresented data to improve fairness and diagnosis in healthcare AI.
Abstract: Fairness has become a critical ethical concern, particularly in AI-based healthcare applications. Imbalanced and insufficient data lead to lower diagnostic performance for underrepresented groups, which in turn harms the fairness of AI systems deployed in real-world scenarios. Generative models, such as diffusion models, offer a promising solution by synthesizing diverse data for underrepresented groups, improving fairness and performance while mitigating privacy risks. We propose a shape-controlled framework that incorporates demographic information into an end-to-end diffusion model, along with an automatic selection strategy that identifies overconfidently misclassified samples. These challenging samples are then augmented via the generative model to improve classification performance, and the same strategy removes potentially misleading, lower-quality synthetic samples. Two ophthalmic experts validated the clinical relevance and plausibility of our synthetic images through random external examination. Our method outperforms state-of-the-art methods on the Harvard-FairVLMed dataset in both fairness and diagnostic accuracy. Our code is available at https://github.com/WANG-ZIHENG/CCG.
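The selection strategy described in the abstract, picking out samples a classifier gets wrong with high confidence, can be sketched as follows. This is a minimal illustrative implementation, not the paper's actual code: the function name, the softmax-probability input, and the confidence threshold `tau` are all assumptions introduced here for clarity.

```python
import numpy as np

def select_overconfident_misclassified(probs, labels, tau=0.9):
    """Return indices of overconfidently misclassified samples.

    probs:  (N, C) array of per-class softmax probabilities (assumed input)
    labels: (N,) array of ground-truth class indices
    tau:    confidence threshold; 0.9 is an illustrative value,
            not a figure taken from the paper
    """
    preds = probs.argmax(axis=1)          # predicted class per sample
    conf = probs.max(axis=1)              # confidence in that prediction
    # wrong prediction AND high confidence -> "challenging" sample
    mask = (preds != labels) & (conf >= tau)
    return np.nonzero(mask)[0]
```

In the framework described above, such samples would then be targeted for augmentation by the shape-controlled diffusion model; the same filter, applied to synthetic images, could flag lower-quality generations for removal.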
Primary Subject Area: Fairness and Bias
Secondary Subject Area: Generative Models
Registration Requirement: Yes
Reproducibility: https://github.com/WANG-ZIHENG/CCG
Visa & Travel: Yes
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 81