Dual Supervised Contrastive Learning Based on Perturbation Uncertainty for Online Class Incremental Learning

Published: 01 Jan 2024, Last Modified: 17 Apr 2025 · ICPR (9) 2024 · CC BY-SA 4.0
Abstract: To keep learning from a data stream with a changing distribution, continual learning has attracted much interest recently. Among its various settings, online class-incremental learning (OCIL) is more realistic and challenging since each sample can be used only once. By employing a buffer to store a few old samples, replay-based methods have achieved great success and currently dominate this area. Due to the single-pass property of OCIL, retrieving high-value samples from memory is crucial. In most current works, the logits from the last fully connected (FC) layer are used to estimate the value of samples. However, the imbalance between the number of samples for old and new classes introduces a severe bias in the FC layer, which results in inaccurate estimation. Moreover, this bias also causes abrupt feature change. To address this problem, we propose a dual supervised contrastive learning method based on perturbation uncertainty. Specifically, we retrieve samples that have not been learned adequately, as measured by perturbation uncertainty. Retraining such samples helps the model learn robust features. We then combine two types of supervised contrastive loss to replace the cross-entropy loss, which further enhances feature robustness and alleviates abrupt feature changes. Extensive experiments on three popular datasets demonstrate that our method surpasses several recently published works.
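The abstract does not give implementation details, but a minimal sketch of the two ingredients it names might look as follows. It assumes a PyTorch classifier `model`, a replay buffer (`buffer_x`, `buffer_y`), Gaussian input noise as the perturbation, and the standard supervised contrastive (SupCon) formulation of Khosla et al. (2020); how the two contrastive terms are combined in the "dual" loss is not specified in this summary, so only one SupCon term is shown. All function and variable names here are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def perturbation_uncertainty(model, buffer_x, n_perturb=4, noise_std=0.05):
    """Score each buffered sample by prediction variance under small random input noise."""
    model.eval()
    with torch.no_grad():
        probs = []
        for _ in range(n_perturb):
            noisy = buffer_x + noise_std * torch.randn_like(buffer_x)
            probs.append(F.softmax(model(noisy), dim=1))
        probs = torch.stack(probs)            # (n_perturb, B, C)
        return probs.var(dim=0).sum(dim=1)    # higher variance = less adequately learned

def retrieve(model, buffer_x, buffer_y, k):
    """Retrieve the k most uncertain samples from the replay buffer for rehearsal."""
    scores = perturbation_uncertainty(model, buffer_x)
    idx = scores.topk(k).indices
    return buffer_x[idx], buffer_y[idx]

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over L2-normalized features."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / temperature
    not_self = ~torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # log-probability of each sample against all other samples in the batch
    exp_sim = sim.exp() * not_self
    log_prob = sim - exp_sim.sum(dim=1, keepdim=True).log()
    mean_log_prob_pos = (pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return -mean_log_prob_pos.mean()
```

In this sketch, replay batches are drawn with `retrieve(...)` and the model is trained on the union of new-stream and retrieved samples with `supcon_loss` in place of cross-entropy, which mirrors the abstract's description at a high level only.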