Diversified and Structure-Realistic Fundus Image Synthesis for Diabetic Retinopathy Lesion Segmentation
Abstract: Automated diabetic retinopathy (DR) lesion segmentation helps improve the efficiency of DR detection. However, obtaining lesion annotations for model training relies heavily on domain expertise and is labor-intensive. Beyond classical approaches to alleviating label scarcity, such as self-supervised and semi-supervised learning, the rapid development of generative models has led several studies to show that using synthetic image-mask pairs for data augmentation is promising. However, because the labeled data available is insufficient to train powerful generative models, synthetic fundus data suffers from two drawbacks: (1) unrealistic anatomical structures and (2) limited lesion diversity. In this paper, we propose a novel framework for synthesizing fundus images with DR lesion masks under limited labels. To increase lesion variation, we design a learnable module that generates anatomically plausible lesion masks as the condition, rather than directly reusing lesion masks from the limited dataset. To reduce the difficulty of learning intricate structures, we avoid generating images solely from lesion-mask conditions; instead, we develop an inpainting strategy that enables the model to generate lesions only within the mask area, based on easily accessible healthy fundus images. Subjective evaluations indicate that our approach generates more realistic fundus images with lesions than other generative methods, and downstream lesion segmentation experiments show that our synthetic data yields the largest improvement across multiple network architectures, surpassing state-of-the-art methods.
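The abstract does not specify the generative backbone or how the inpainting is realized; as a rough illustration only, the sketch below shows one common way to restrict generation to a lesion mask, namely RePaint-style blending inside a DDPM sampling loop, where the healthy background outside the mask is kept fixed and only the masked region is synthesized. The `denoiser` interface, the linear noise schedule, and the tensor shapes are hypothetical placeholders, not the authors' implementation.

```python
import torch


@torch.no_grad()
def inpaint_lesions(denoiser, healthy, lesion_mask, T=1000):
    """Sketch: synthesize lesions only inside `lesion_mask` on a healthy fundus image.

    healthy:     (B, 3, H, W) healthy fundus image scaled to [-1, 1]
    lesion_mask: (B, 1, H, W) binary mask, 1 = generate lesion here
    denoiser:    assumed interface predicting noise eps from (x_t, t, lesion_mask)
    """
    device = healthy.device
    # Hypothetical linear beta schedule (not taken from the paper).
    betas = torch.linspace(1e-4, 0.02, T, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(healthy)  # start the reverse process from pure noise
    for t in reversed(range(T)):
        tt = torch.full((healthy.size(0),), t, device=device, dtype=torch.long)

        # Standard DDPM reverse step over the whole image.
        eps = denoiser(x, tt, lesion_mask)
        a_t, ab_t = alphas[t], alpha_bars[t]
        mean = (x - (1.0 - a_t) / (1.0 - ab_t).sqrt() * eps) / a_t.sqrt()
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x_gen = mean + betas[t].sqrt() * noise

        # Noise the known healthy background to the matching timestep t-1.
        if t > 0:
            ab_prev = alpha_bars[t - 1]
            x_known = ab_prev.sqrt() * healthy + (1.0 - ab_prev).sqrt() * torch.randn_like(healthy)
        else:
            x_known = healthy

        # Keep the healthy background outside the mask; generate only inside it.
        x = lesion_mask * x_gen + (1.0 - lesion_mask) * x_known

    return x
```

In this kind of scheme the model never has to reproduce vessels or the optic disc from scratch, since everything outside the lesion mask is anchored to the real healthy image at every step, which matches the abstract's motivation for avoiding generation from lesion-mask conditions alone.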