Self-Correlation Network With Triple Contrastive Learning for Hyperspectral Image Classification With Noisy Labels

Published: 01 Jan 2025, Last Modified: 09 Nov 2025
Venue: IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2025
License: CC BY-SA 4.0
Abstract: Data quality is essential for training deep learning models, and the challenge of noisy labels in hyperspectral image (HSI) classification has recently attracted considerable attention. However, current deep learning approaches typically employ conventional convolutions that treat all spatial frequency components uniformly, neglecting feature-dependent knowledge and thereby hampering learning with noisy labels. Consequently, these methods perform poorly when the ratio of noisy to clean samples is high. To address this drawback, we propose an end-to-end self-correlation framework with triple contrastive learning (SCTCL) for HSI classification with noisy labels. SCTCL maximizes the similarity of positive pairs of HSI features through a contrastive loss defined at the cluster, instance, and structure levels. First, we construct HSI data pairs through weak and strong data augmentations. Then, we propose a cross-convolutional self-correlation network (ConvSCNet) module to extract spatial-spectral feature representations from all augmented samples. Subsequently, we apply instance- and cluster-level contrastive learning, projecting the feature matrix into row and column spaces to minimize the similarity of negative pairs and maximize that of positive pairs. Furthermore, we incorporate structure-level representation learning to resolve inconsistencies across the different projections. Together, these components prevent the classifier from overfitting to noisy labels. We conducted experiments on five publicly available HSI datasets with various noisy-to-clean sample ratios, considering both symmetric and asymmetric label noise. The classification results show that the proposed SCTCL outperforms state-of-the-art methods when training on HSI data with limited clean samples.
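To make the three-level objective concrete, the sketch below shows one plausible PyTorch formulation: an NT-Xent loss applied to the rows of the feature matrix (instance level) and to its columns via the transposed class-probability matrix (cluster level), plus a consistency term between the pairwise similarity structures of the two views (structure level). The function names, projection shapes, and the `temperature` value are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a triple contrastive objective (instance-, cluster-,
# structure-level), assuming an NT-Xent formulation and weak/strong views.
import torch
import torch.nn.functional as F

def nt_xent(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss: row i of `a` and row i of `b` form a positive pair;
    all other rows in the concatenated batch act as negatives."""
    n = a.size(0)
    z = F.normalize(torch.cat([a, b], dim=0), dim=1)   # (2n, d), unit-norm rows
    sim = z @ z.t() / temperature                      # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))                  # exclude self-similarity
    # the positive of sample i is sample i+n, and vice versa
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets.to(sim.device))

def triple_contrastive_loss(feat_weak, feat_strong, logits_weak, logits_strong,
                            temperature: float = 0.5):
    # Instance level: contrast rows (per-sample embeddings) of the two views.
    l_inst = nt_xent(feat_weak, feat_strong, temperature)
    # Cluster level: contrast columns (per-cluster assignment vectors) of the
    # softmax probability matrices, i.e. the feature matrix in column space.
    p_w = logits_weak.softmax(dim=1).t()               # (num_clusters, n)
    p_s = logits_strong.softmax(dim=1).t()
    l_clus = nt_xent(p_w, p_s, temperature)
    # Structure level: penalize inconsistency between the pairwise similarity
    # structures induced by the two projections.
    s_w = F.normalize(feat_weak, dim=1) @ F.normalize(feat_weak, dim=1).t()
    s_s = F.normalize(feat_strong, dim=1) @ F.normalize(feat_strong, dim=1).t()
    l_struct = F.mse_loss(s_w, s_s)
    return l_inst + l_clus + l_struct
```

Under this reading, the cluster-level term is what discourages degenerate solutions where all noisy samples collapse into one class, while the structure-level term keeps the two augmented views geometrically consistent; the paper's actual weighting of the three terms may differ.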