Abstract: Class incremental learning (CIL) is key to achieving adaptive vision intelligence, and network expansion (NE) is one of its main streams. However, state-of-the-art (SOTA) methods typically suffer from feature diffusion, growing parameter counts, feature confusion, and classifier bias. In view of this, a novel dynamic structure dubbed recurrent NE (RNE) is proposed, which establishes connections among task experts. Specifically, the previous task experts transfer features sequentially through a shared module, and the new task expert makes adjustments based on the received features rather than re-extracted ones, thereby focusing on the key area and avoiding feature diffusion. Furthermore, RNE is compressed by replacing additional task experts with lightweight ones, which significantly reduces the number of parameters while leaving performance almost unaltered. In addition, feature confusion is alleviated by a decoupled classifier, and classifier bias is corrected by pseudo-feature generation. Extensive experiments on four widely adopted benchmark datasets, i.e., CIFAR-100, ImageNet-100, Food-101, and ImageNet-1K, demonstrate that RNE achieves SOTA performance in both ordinary and challenging CIL settings.
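To make the recurrent expansion idea concrete, below is a minimal PyTorch-style sketch of one possible reading of the abstract: one expert per task, a shared module that passes a feature state sequentially from older experts to newer ones, and the newest expert adjusting the received state rather than re-extracting features. All module choices here (a GRUCell as the shared transfer module, `LazyLinear` experts, the dimensions) are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class RNESketch(nn.Module):
    """Illustrative sketch of recurrent network expansion (RNE).

    Assumptions (not from the paper): each task expert is a small
    projection head, and the shared module is a single GRUCell that
    carries a feature state from expert to expert, so each new expert
    refines the received features instead of re-extracting them.
    """

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.feat_dim = feat_dim
        self.experts = nn.ModuleList()                 # one expert per task
        self.shared = nn.GRUCell(feat_dim, feat_dim)   # shared transfer module

    def add_task_expert(self) -> None:
        # Hypothetical expert: projects flattened input to feat_dim.
        # In practice this would be a (possibly lightweight) backbone.
        self.experts.append(nn.Sequential(nn.LazyLinear(self.feat_dim), nn.ReLU()))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Earlier experts transfer features sequentially through the
        # shared module; the newest expert adjusts the received state.
        h = torch.zeros(x.size(0), self.feat_dim, device=x.device)
        for expert in self.experts:
            h = self.shared(expert(x.flatten(1)), h)
        return h  # final state would feed the (decoupled) classifier


if __name__ == "__main__":
    model = RNESketch()
    for _ in range(3):      # three incremental tasks
        model.add_task_expert()
    feats = model(torch.randn(4, 3, 32, 32))
    print(feats.shape)      # torch.Size([4, 128])
```

The recurrent hand-off is the point of the sketch: because each expert conditions on the state produced by its predecessors, the compressed (lightweight) experts mentioned in the abstract could plausibly replace full ones without re-deriving features from scratch, though the paper's actual compression scheme is not specified here.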