Abstract: Deep convolutional neural networks have achieved significant breakthroughs in medical image classification, under the assumption that training samples from all classes are available simultaneously. In real-world medical scenarios, however, new diseases must be learned continuously, which has given rise to class incremental learning (CIL) in the medical domain. CIL typically suffers from catastrophic forgetting when trained on new classes. This forgetting is mainly caused by the imbalance between old and new classes, and it becomes even more severe on inherently imbalanced medical datasets. In this work, we introduce two simple yet effective plug-in methods to mitigate the adverse effects of this imbalance. First, we propose a CIL-balanced classification loss that reduces the classifier's bias toward majority classes via logit adjustment. Second, we propose a distribution margin loss that both alleviates inter-class overlap in the embedding space and enforces intra-class compactness. We evaluate the effectiveness of our method with extensive experiments on three benchmark datasets (CCH5000, HAM10000, and EyePACS). The results demonstrate that our approach outperforms state-of-the-art methods.
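To make the logit-adjustment idea concrete, the following is a minimal PyTorch sketch of a prior-based adjustment in which each logit is offset by the log-frequency of its class, so that rare (often old) classes are not dominated by majority classes in the softmax. The function name `cil_balanced_loss`, the `class_counts` tensor, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cil_balanced_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with prior-based logit adjustment (illustrative sketch).

    class_counts: per-class sample counts over the classes seen so far,
    shape (num_seen_classes,). Majority classes get a larger additive
    offset, which effectively down-weights them relative to rare classes.
    """
    priors = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(priors + 1e-12)
    return F.cross_entropy(adjusted, targets)

# Hypothetical usage: logits of shape (batch, num_seen_classes)
# loss = cil_balanced_loss(model(images), labels, class_counts)
```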
Primary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: To summarize, the main contributions of this paper are:
1) To reduce the classifier bias toward new and majority classes, we propose a CIL-balanced classification loss that up-weights rare classes via logit adjustment (sketched after the abstract above).
2) We introduce a novel distribution margin loss that effectively separates the distributions of old and new classes to avoid ambiguity, while also enforcing intra-class compactness (see the sketch after this list).
3) Extensive experiments demonstrate that our method effectively addresses the issue of data imbalance, achieving state-of-the-art performance on three benchmark datasets: CCH5000, HAM10000, and EyePACS.
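As a rough illustration of what a margin objective over class distributions can look like, here is a hedged PyTorch sketch that pulls embeddings toward their class mean (compactness) and pushes class means apart by at least a margin (separation). The helper `distribution_margin_loss`, the Euclidean class-mean statistics, and the single `margin` hyperparameter are assumptions for illustration, not the paper's actual loss.

```python
import torch

def distribution_margin_loss(embeddings, targets, margin=1.0):
    """Illustrative margin loss on per-class embedding statistics.

    embeddings: (batch, dim) features; targets: (batch,) class labels.
    Assumes each batch contains samples from at least two classes.
    """
    classes = targets.unique()
    means = torch.stack(
        [embeddings[targets == c].mean(dim=0) for c in classes]
    )

    # Intra-class compactness: mean squared distance to the class mean.
    compact = torch.stack(
        [(embeddings[targets == c] - means[i]).pow(2).sum(dim=1).mean()
         for i, c in enumerate(classes)]
    ).mean()

    # Inter-class separation: hinge penalty on pairwise mean distances
    # that fall below the margin (diagonal self-distances excluded).
    dists = torch.cdist(means, means)
    off_diag = ~torch.eye(len(classes), dtype=torch.bool,
                          device=dists.device)
    separate = torch.clamp(margin - dists[off_diag], min=0).mean()

    return compact + separate
```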
Submission Number: 1641