Abstract: Diabetic retinopathy (DR) is a prevalent complication of diabetes that can lead to vision impairment and blindness, making accurate DR grading essential for early diagnosis and treatment. Most existing DR grading methods assume that the training and test images share the same distribution, so generalization to unseen target domains has not been adequately addressed. In this paper, we observe that in the feature space, images from the same domain tend to cluster together rather than images of the same grade. This is largely because, when the representation of lesions is influenced by style variations, the network tends to memorize features of different image domains through separate channels, which significantly impairs the generalization capability of deep learning models. To address this issue, we propose a global-aware channel similarity that reduces the influence of lesion position and size when measuring distances in the feature space. We further exploit this similarity in a grade-aware contrastive learning approach, which guides the learning of domain-invariant features by mapping images of the same grade into a compact subspace. Additionally, we develop a multi-scale de-stylization method that explicitly eliminates style information from the features and compels the model to exploit diverse representations of the lesions. Extensive experiments on multiple DR grading datasets demonstrate the state-of-the-art generalization ability of the proposed method.
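The ideas named in the abstract can be sketched schematically. The snippet below is a minimal NumPy illustration, not the authors' actual formulation: `channel_descriptor` assumes global average pooling per channel (so spatial position and size of lesions are marginalized out), `grade_contrastive_loss` assumes an InfoNCE-style contrastive objective in which same-grade images are positives, and `de_stylize` assumes instance-normalization-style removal of per-channel style statistics. All function names and the temperature parameter `tau` are hypothetical.

```python
import numpy as np

def channel_descriptor(feat):
    # Hypothetical global-aware descriptor: global average pool each
    # channel of a (C, H, W) feature map, discarding spatial layout.
    return feat.mean(axis=(1, 2))

def channel_similarity(f1, f2):
    # Cosine similarity between pooled channel descriptors; invariant
    # to where in the image a lesion appears.
    d1, d2 = channel_descriptor(f1), channel_descriptor(f2)
    return d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-8)

def grade_contrastive_loss(feats, grades, tau=0.1):
    # InfoNCE-style grade-aware contrastive loss (illustrative):
    # images sharing a DR grade are pulled together in descriptor space.
    n = len(feats)
    descs = np.stack([channel_descriptor(f) for f in feats])
    descs /= np.linalg.norm(descs, axis=1, keepdims=True) + 1e-8
    sim = descs @ descs.T / tau
    loss = 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and grades[j] == grades[i]]
        if not pos:
            continue
        denom = (np.exp(sim[i]) * (1 - np.eye(n)[i])).sum()  # mask self
        loss += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in pos])
    return loss / n

def de_stylize(feat, eps=1e-6):
    # Instance-normalization-style de-stylization: removing per-channel
    # mean and std strips the style statistics of the source domain.
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    return (feat - mu) / (sigma + eps)
```

In this sketch, global pooling makes the similarity insensitive to lesion position, and applying `de_stylize` at several feature scales would correspond to the multi-scale de-stylization described above.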