Abstract: Task-incremental learning methods that adopt knowledge distillation face two significant challenges: confidence bias and knowledge loss. These challenges make it difficult to balance the stability and plasticity of the network during incremental learning. In this article, we propose Double Confidence Calibration Focused Distillation (DCCFD) to address these challenges. We introduce intratask and intertask confidence calibration modules that mitigate network overconfidence during incremental learning and reduce feature representation bias. We also propose a focused distillation (FD) module that alleviates knowledge loss across task increments, improving model stability without reducing plasticity. Experimental results on the CIFAR-100, TinyImageNet, and CORe50 datasets demonstrate the effectiveness of our method, which matches or exceeds the state of the art. Furthermore, our method can serve as a plug-and-play module that consistently improves class-incremental learning methods.