Heterogeneous Incremental Learning for Dense Prediction: Advancing Knowledge Retention via Self-Distillation
Keywords: incremental learning
Abstract: Incremental Learning (IL) aims to preserve knowledge acquired from previous tasks while incorporating knowledge from a sequence of new tasks. However, most prior work explores only streams of homogeneous tasks (*e.g.*, only classification tasks) and neglects the scenario of learning across heterogeneous tasks with different output structures. In this work, we formalize this broader setting as heterogeneous incremental learning (HIL).
Departing from conventional IL, the task sequence in HIL spans different task types, so the learner must retain heterogeneous knowledge across different output-space structures.
To instantiate this setting, we study HIL in the context of dense prediction (HIL4DP), a realistic and challenging scenario.
To this end, we propose Heterogeneity-aware Incremental Self-Distillation (HISD), an exemplar-free method that incrementally preserves previously acquired heterogeneous knowledge through self-distillation.
HISD comprises two complementary components: a distribution-balanced loss that alleviates global imbalance in the prediction distribution, and a salience-guided loss that concentrates learning on informative edge pixels extracted with the Sobel operator.
Extensive experiments demonstrate that the proposed HISD significantly outperforms existing IL baselines in this new scenario.
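As a rough illustration of the salience-guided idea sketched in the abstract, the snippet below weights a per-pixel distillation term by a Sobel edge map of the ground truth. This is a minimal sketch, not the authors' implementation: the function name, the L2 distillation term, and the normalization of the edge map are all assumptions.

```python
import torch
import torch.nn.functional as F

def salience_guided_distill_loss(student_out, teacher_out, target):
    """Hypothetical sketch: weight per-pixel distillation by Sobel edge salience.

    student_out / teacher_out: dense predictions of shape (B, C, H, W).
    target: ground-truth map of shape (B, 1, H, W).
    """
    # Sobel kernels for horizontal and vertical gradients (Gy = Gx^T).
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=target.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)

    gx = F.conv2d(target, kx, padding=1)
    gy = F.conv2d(target, ky, padding=1)
    edge = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # edge magnitude per pixel

    # Normalize to [0, 1] so the weighting is comparable across images (assumption).
    weight = edge / (edge.amax(dim=(2, 3), keepdim=True) + 1e-8)

    # Per-pixel L2 distillation, emphasized on informative edge pixels.
    per_pixel = ((student_out - teacher_out) ** 2).mean(dim=1, keepdim=True)
    return (weight * per_pixel).mean()
```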
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8902