Leveraging Multiple Deep Experts for Online Class-incremental Learning

Published: 2025, Last Modified: 09 Nov 2025 · ICME 2025 · CC BY-SA 4.0
Abstract: Online incremental learning aims to enable learning systems to continuously accumulate new knowledge from streaming data in a single-pass manner while preserving previously acquired information. This more realistic and challenging setting has gained increasing attention in recent years. State-of-the-art methods treat each module of a model, from shallow to deep, as a separate sub-expert network and transfer all the shallow experts' knowledge into the final deep expert network. Although this yields notable improvements, we argue that directly supervising shallow layers hampers their acquisition of task-invariant knowledge. Furthermore, explicitly designating the final expert as a student network that absorbs knowledge from all other experts lacks adaptability, since different experts may not excel uniformly across all tasks. To address these limitations, we use the shallow layers of the model as a shared feature extractor, while the deeper layers form a set of experts capable of learning robust and diverse features. Moreover, to facilitate knowledge transfer between multiple experts, we introduce the LEEP score to assess the feature transferability of each expert on new tasks, thereby selecting the most suitable expert as the teacher network for the new task. Extensive experiments on two evaluation benchmarks verify the effectiveness of our method (e.g., accuracy gains of up to 1.3% on Split CIFAR-100 and 2.5% on Split Tiny-ImageNet). Code is available at https://github.com/untitledunmastered1998/MDE-OIL.
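The abstract leans on the LEEP score (Log Expected Empirical Prediction; Nguyen et al., 2020) to rank experts by transferability. As a rough illustration of how such a per-expert score could be computed and used for teacher selection, here is a minimal NumPy sketch of the standard LEEP metric; the function name, the expert list, and the selection loop are illustrative assumptions, not the authors' implementation (which is in the linked repository).

```python
import numpy as np

def leep_score(probs: np.ndarray, labels: np.ndarray) -> float:
    """Log Expected Empirical Prediction (LEEP; Nguyen et al., 2020).

    probs:  (n, z) softmax outputs of one expert over its z source classes,
            evaluated on n examples from the new task.
    labels: (n,) integer labels of the new task in {0, ..., y-1}.
    Higher LEEP suggests better expected transferability to the new task.
    """
    n, _ = probs.shape
    num_target = int(labels.max()) + 1

    # Empirical joint distribution P(y, z) over target labels and source classes.
    joint = np.zeros((num_target, probs.shape[1]))
    for y in range(num_target):
        joint[y] = probs[labels == y].sum(axis=0) / n

    # Conditional P(y | z) = P(y, z) / P(z).
    marginal_z = joint.sum(axis=0, keepdims=True)      # P(z)
    cond = joint / np.clip(marginal_z, 1e-12, None)    # P(y | z), shape (y, z)

    # LEEP = mean_i log( sum_z P(y_i | z) * theta(x_i)_z ).
    expected = probs @ cond.T                          # (n, y) expected predictions
    eep = expected[np.arange(n), labels]               # probability of the true label
    return float(np.mean(np.log(np.clip(eep, 1e-12, None))))

# Hypothetical usage: pick the most transferable expert as the teacher.
# `all_expert_probs` would hold each expert's softmax outputs on new-task data.
# scores = [leep_score(p, new_task_labels) for p in all_expert_probs]
# teacher_idx = int(np.argmax(scores))
```

Because LEEP needs only an expert's softmax outputs and the new task's labels (no retraining), scoring every expert on incoming data is cheap enough to fit the single-pass online setting the paper targets.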