Abstract: Current intelligent diagnosis systems struggle to continually learn to diagnose an ever-growing set of diseases, because old knowledge is catastrophically forgotten when new knowledge is learned. Although storing a small amount of old data for subsequent continual learning can effectively alleviate forgetting, the severe data imbalance between old classes and to-be-learned new classes during classifier training often biases the updated classifier's predictions towards the newly learned classes. In this study, an outlier detection technique is applied in a novel way to train an additional expert classifier for the new classes, which helps alleviate the class imbalance issue and discriminates the newly learned classes from old classes during inference (rather than during training). Specifically, the stored small set of old-class data is treated as outliers when training the expert classifier, so that the expert classifier's output probability distributions are expected to differ markedly between test data of the old classes and test data of the new classes. This difference between old and new classes can then be used to fine-tune the original output of the updated classifier, which is responsible for predicting all learned (old and new) classes. During inference, a novel ensemble strategy is proposed to combine the predictions of the updated classifier, the expert classifier, and the previously learned old classifier. The proposed learning and inference framework can be easily combined with existing continual learning strategies. Empirical evaluations on three medical image datasets and one natural image dataset show that the proposed framework effectively improves continual learning performance.
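The inference-time ensemble described above can be illustrated with a minimal sketch. This is not the paper's exact combination rule; it only shows the general idea under stated assumptions: the expert classifier's output entropy over the new classes is used as a proxy for "outlierness" (high entropy suggests an old-class sample), and that weight is used to mix the updated classifier's output with the old and expert classifiers' outputs. All function and variable names (`ensemble_predict`, `p_updated`, `p_expert`, `p_old`, `n_old`) are hypothetical.

```python
import numpy as np

def ensemble_predict(p_updated, p_expert, p_old, n_old):
    """Illustrative (not the paper's exact) ensemble of three classifiers.

    p_updated: probabilities over all (old + new) classes, old classes first
    p_expert:  probabilities over the new classes only (expert classifier)
    p_old:     probabilities over the old classes only (previous classifier)
    n_old:     number of old classes
    """
    eps = 1e-12
    # Entropy of the expert output, normalized to [0, 1]. Because old-class
    # samples were treated as outliers during expert training, a near-uniform
    # (high-entropy) expert output suggests the sample belongs to an old class.
    ent = -np.sum(p_expert * np.log(p_expert + eps))
    w_old = ent / np.log(len(p_expert))

    # Mix: old-class scores combine the updated and old classifiers, weighted
    # by w_old; new-class scores combine the updated and expert classifiers,
    # weighted by (1 - w_old). Renormalize to a probability distribution.
    p = np.empty_like(p_updated)
    p[:n_old] = w_old * 0.5 * (p_updated[:n_old] + p_old)
    p[n_old:] = (1.0 - w_old) * 0.5 * (p_updated[n_old:] + p_expert)
    return p / p.sum()
```

For example, with two old and two new classes, a sharply peaked expert output keeps the prediction on a new class, while a near-uniform expert output shifts the probability mass back towards the old classes even when the updated classifier slightly favors a new class.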