Dual attention-guided distillation for class incremental semantic segmentation

Published: 2025 · Last Modified: 19 May 2025 · Appl. Intell. 2025 · CC BY-SA 4.0
Abstract: Class Incremental Semantic Segmentation (CISS) aims to segment newly added classes without losing the ability to segment previously learned ones. Current CISS methods based on feature knowledge distillation suffer from the stability-plasticity dilemma: excessive distillation can prevent the model from learning new classes. Moreover, distilling features indiscriminately, without emphasis, fails to preserve old knowledge effectively. To address these issues, a more fine-grained and focused approach to knowledge transfer, named dual attention-guided distillation (DAGD), is proposed for the CISS task. This approach not only ensures that inherited knowledge is distilled in a targeted manner but also allows the model to adapt to and learn new knowledge more efficiently. The DAGD model comprises a channel attention-guided distillation module and a spatial attention-guided distillation module. The former distills channel-wise attention maps to improve knowledge transfer in essential channels while leaving room for learning new classes. The latter encodes a weight coefficient map that highlights important regions in the spatial dimension, further decoupling old-knowledge retention from new-knowledge acquisition. Furthermore, a dynamic temperature strategy is introduced for logit knowledge distillation: it sharpens the predictive distribution produced by the old model, achieving more accurate knowledge transfer. Extensive experiments on the Pascal VOC 2012 and ADE20K datasets demonstrate that our method achieves competitive results.
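The abstract gives no formulas, but the two attention-guided modules and the dynamic temperature strategy can be summarized as distillation loss terms. Below is a minimal PyTorch sketch under stated assumptions: the squared-activation attention maps, the equal weighting of the two feature terms, and the linear temperature schedule (`t_max`, `t_min`, `progress`) are illustrative choices, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def channel_attention_map(feat):
    # feat: (B, C, H, W) -> (B, C); channel attention via pooled squared activations
    return F.softmax(feat.pow(2).mean(dim=(2, 3)), dim=1)

def spatial_attention_map(feat):
    # feat: (B, C, H, W) -> (B, H, W); spatial attention from per-pixel channel energy
    attn = feat.pow(2).mean(dim=1)
    b = attn.size(0)
    return F.softmax(attn.view(b, -1), dim=1).view_as(attn)

def dagd_feature_loss(feat_new, feat_old):
    # Channel term: match channel attention maps of the new and old features.
    ca_loss = F.mse_loss(channel_attention_map(feat_new),
                         channel_attention_map(feat_old))
    # Spatial term: weight the per-pixel feature discrepancy by the old model's
    # spatial attention so that important regions dominate the distillation.
    w = spatial_attention_map(feat_old).unsqueeze(1)  # (B, 1, H, W)
    sa_loss = (w * (feat_new - feat_old).pow(2)).sum() / feat_new.size(0)
    return ca_loss + sa_loss

def dynamic_temperature_logit_loss(logits_new, logits_old,
                                   t_max=4.0, t_min=1.0, progress=0.0):
    # Anneal T from t_max to t_min as training advances (progress in [0, 1]),
    # sharpening the old model's predictive distribution over time.
    T = t_max - (t_max - t_min) * progress
    p_old = F.softmax(logits_old / T, dim=1)
    log_p_new = F.log_softmax(logits_new / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * T * T
```

In an incremental step, these terms would typically be added to the cross-entropy loss on the new classes, with `feat_old` and `logits_old` produced by the frozen old model under `torch.no_grad()`.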