Abstract: In recent years, hyperspectral image classification (HIC) has achieved remarkable success in remote sensing. Existing methods, however, depend critically on accurate annotations; under complex spectral signatures and environmental noise, label errors inevitably degrade spectral–spatial representations and impair model generalization. Some existing methods address label noise by discarding noisy samples, but this sacrifices informative data and reduces the diversity of positive–negative pairs required for contrastive learning, thereby undermining feature discriminability. To address these challenges, we propose selective-relaxed contrastive learning (SRCL), a unified framework for robust spectral–spatial representation learning under noisy labels. Specifically, we first fit a Gaussian mixture model (GMM) to per-sample loss values to partition the dataset into clean and noisy subsets. Next, we perform spectral–spatial embedding graph learning to capture structural information and apply label propagation to correct erroneous annotations. Subsequently, we introduce a selective-relaxed contrastive module that aggregates low-confidence noisy samples into a super-class, increasing their affinity with the correct category and mitigating noise-induced errors. Meanwhile, high-confidence clean samples are used in a supervised manner to generate reliable positive and negative pairs for robust representation learning. Extensive experiments on several benchmark datasets with varying noise levels demonstrate that SRCL substantially outperforms state-of-the-art approaches by effectively reusing noisy samples and significantly improving classification performance.
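The first step of the pipeline, fitting a two-component GMM to per-sample losses and thresholding the posterior of the low-loss component, can be sketched as follows. This is a minimal illustration in the style of common noisy-label pipelines, not the paper's implementation; the function name, normalization, and threshold are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_losses, threshold=0.5):
    """Partition samples into clean/noisy via a two-component GMM on losses.

    Returns a boolean clean-sample mask and the per-sample posterior
    probability of belonging to the low-loss (clean) component.
    """
    losses = np.asarray(per_sample_losses, dtype=float).reshape(-1, 1)
    # Min-max normalize losses for numerical stability (an assumption;
    # the paper does not specify a normalization).
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, random_state=0)
    gmm.fit(losses)
    # The component with the smaller mean loss models the clean samples.
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    p_clean = gmm.predict_proba(losses)[:, clean_comp]
    return p_clean > threshold, p_clean
```

Samples above the posterior threshold would feed the supervised contrastive branch, while the rest would be routed to the graph-based label-propagation and super-class relaxation steps described above.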
External IDs: dblp:journals/pr/WangNWTYWN26