Highlights
• Propose a new scalable incremental learning framework with minimal parameter growth.
• Design two channels for stability and plasticity, and enable their collaboration.
• Add a distillation loss to enhance knowledge generalization in intermediate layers (see the sketch after this list).
• The proposed method outperforms classical methods across datasets and settings.
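The highlights only summarize the contributions; as a rough illustration of the third point, the sketch below shows one common form of intermediate-layer feature distillation between an old (teacher) model and the current (student) model. The function name, the per-sample L2 normalization, and the averaging over layers are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def intermediate_distillation_loss(student_feats, teacher_feats):
    """MSE distillation over intermediate feature maps.

    Both arguments are lists of tensors with matching shapes, one per
    selected intermediate layer. Features are flattened and L2-normalized
    per sample so the loss compares feature direction rather than raw
    magnitude (a common choice; the paper's loss may differ).
    """
    loss = torch.tensor(0.0)
    for s, t in zip(student_feats, teacher_feats):
        s = F.normalize(s.flatten(1), dim=1)
        # detach the teacher so no gradient flows into the old model
        t = F.normalize(t.detach().flatten(1), dim=1)
        loss = loss + F.mse_loss(s, t)
    return loss / len(student_feats)
```

In incremental learning, such a term is typically added to the task loss so that intermediate representations of the new model stay close to those of the model trained on earlier tasks, which is one way to encourage the knowledge generalization the highlight refers to.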