Abstract: Glaucoma is a leading cause of irreversible blindness, and early diagnosis is crucial for effective treatment. However, AI-assisted glaucoma diagnosis faces challenges in fairness and data scarcity, because AI model biases can lead to disparities across demographic groups. To address this, we propose GlaucoDiff, a diffusion-based generative model that synthesizes scanning laser ophthalmoscopy (SLO) images with precise control over the vertical cup-to-disc ratio. Unlike previous methods, GlaucoDiff enables bidirectional synthesis, generating both healthy and glaucomatous samples of varying severity, thus enhancing dataset diversity. To ensure anatomical fidelity, GlaucoDiff leverages real fundus backgrounds while generating the optic nerve head regions. We also introduce a sample selection strategy that filters generated images based on their alignment agreement percentage with the target optic structures, ensuring the high quality of the synthetic data. Experiments on two public ophthalmic datasets demonstrate that GlaucoDiff outperforms state-of-the-art approaches in both diagnostic performance and fairness evaluation. Evaluations by two independent ophthalmologists confirm the clinical relevance of the generated images, highlighting GlaucoDiff's potential for improving AI-driven glaucoma diagnosis. Our code is available (https://github.com/WANG-ZIHENG/GlaucoDiff).
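The abstract does not detail how the vertical cup-to-disc ratio or the alignment agreement is computed; the sketch below is a minimal, hedged illustration of how a vertical cup-to-disc ratio could be measured from cup/disc segmentation masks and used to filter synthetic images against a requested target. The function names `vertical_cup_to_disc_ratio` and `keep_sample`, the mask-based measurement, and the `tolerance` threshold are assumptions for illustration only, not the paper's actual selection criterion.

```python
import numpy as np

def vertical_cup_to_disc_ratio(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio: vertical extent of the cup divided by
    vertical extent of the disc, measured from binary segmentation masks."""
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    if len(cup_rows) == 0 or len(disc_rows) == 0:
        return 0.0
    cup_height = cup_rows.max() - cup_rows.min() + 1
    disc_height = disc_rows.max() - disc_rows.min() + 1
    return cup_height / disc_height

def keep_sample(cup_mask: np.ndarray, disc_mask: np.ndarray,
                target_vcdr: float, tolerance: float = 0.05) -> bool:
    """Hypothetical selection rule: keep a synthetic image only if its measured
    vCDR agrees with the requested target within a fixed tolerance."""
    measured = vertical_cup_to_disc_ratio(cup_mask, disc_mask)
    return abs(measured - target_vcdr) <= tolerance
```

In practice such a filter would be applied after segmenting the optic cup and disc in each generated image, discarding samples whose measured structure disagrees with the conditioning target.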