HyperKD: Lifelong Hyperspectral Image Classification With Cross-Spectral-Spatial Knowledge Distillation
Abstract: Hyperspectral image (HSI) classification models suffer from catastrophic forgetting: when knowledge is acquired continuously from a sequence of tasks, performance on previously learned tasks declines sharply after each new task is learned. In recent years, several lifelong learning approaches have been proposed for HSI classification, yet despite this progress, catastrophic forgetting remains a significant unresolved challenge. In this article, we propose HyperKD, a novel lifelong learning framework for HSI classification based on exemplar replay and cross-spectral-spatial feature knowledge distillation (KD). Specifically, the framework combines a min-max cross-selection (MMCS) module tailored to HSI characteristics with a cross-spectral-spatial knowledge distillation (CSSKD) module. The MMCS module selects the most representative or diverse samples from previous tasks as exemplars for replay. The CSSKD module not only transfers the prediction logit distribution from the previous network to the current network but also transfers the spectral-spatial feature distribution via cross-network KD, without directly assessing the similarity of feature distributions, thereby retaining more knowledge and mitigating forgetting. In experiments on a sequence of tasks built from the Pavia, Indian Pines, Salinas, and Houston datasets, our approach outperforms previous lifelong learning methods for HSI classification and effectively mitigates catastrophic forgetting. The code implementation of our approach will be publicly available at https://github.com/lzlsxs/hyperkd.
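To make the distillation idea concrete, the sketch below shows a generic replay-time loss that combines logit-level KD with a feature-level term, in the spirit of the abstract. It is a minimal illustration, not the paper's CSSKD formulation: the function name `distillation_loss`, the temperature, the weights `alpha` and `beta`, and the MSE-on-normalized-features placeholder are all assumptions; the actual cross-network spectral-spatial transfer is defined in the paper body.

```python
import torch
import torch.nn.functional as F

def distillation_loss(logits_new, logits_old, feat_new, feat_old,
                      temperature=2.0, alpha=1.0, beta=1.0):
    """Illustrative replay-time loss: logit KD plus a feature-alignment term.

    logits_new / logits_old: (B, C) class logits from the current and the frozen previous network.
    feat_new  / feat_old:    (B, D) pooled spectral-spatial features from the two networks.
    """
    # Logit-level KD: soften both distributions with a temperature and
    # minimize the KL divergence (standard Hinton-style distillation).
    log_p_new = F.log_softmax(logits_new / temperature, dim=1)
    p_old = F.softmax(logits_old / temperature, dim=1)
    kd_logits = F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

    # Feature-level term: align the L2-normalized feature vectors of the two
    # networks (a simple stand-in for the paper's cross-network feature KD).
    kd_feat = F.mse_loss(F.normalize(feat_new, dim=1), F.normalize(feat_old, dim=1))

    return alpha * kd_logits + beta * kd_feat
```

In a replay-based setup, this loss would be computed on the stored exemplars (and optionally the new-task samples) and added to the standard cross-entropy loss of the current task.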