Sorted Texture-Aware Glance and Gaze Network for Hyperspectral Image Classification With Low Training Samples

Published: 01 Jan 2025, Last Modified: 31 Jul 2025 · IEEE Trans. Geosci. Remote Sens. 2025 · CC BY-SA 4.0
Abstract: Hyperspectral images (HSIs) provide a wealth of information surpassing human visual capabilities, enabling precise identification of remote sensing targets. However, HSI classification faces significant challenges, including insufficient long-range dependency modeling, difficulties in data collection, and the tendency of models to become trapped in local optima during training. To overcome these obstacles, we present the sorted texture-aware glance and gaze network (ST-GGNet), tailored for HSI classification. First, we propose the glance and gaze attention (GGA) mechanism, which employs feature interaction-based long-range modeling to minimize information loss across spectral bands and to focus on critical land-cover features within HSIs. Second, the sorted texture-aware module (STM) is introduced to deeply mine and efficiently utilize detailed texture and spectral information, thereby enhancing accuracy even with limited training data. Additionally, we propose the budding growth optimizer (BGO), an optimization algorithm that integrates a budding growth mechanism to help the model discover better solutions, boosting both optimization and classification performance. Experimental evaluations conducted on four public HSI datasets (Pavia University, Salinas, Houston, and WHU-Longkou) demonstrate the superior performance of ST-GGNet compared to nine state-of-the-art (SOTA) classification methods. Specifically, under limited training samples, ST-GGNet achieves overall accuracies (OAs) of 99.42%, 96.88%, 96.86%, and 97.74%; average accuracies (AAs) of 98.90%, 98.01%, 97.07%, and 92.48%; and Kappa coefficients of 99.24%, 96.53%, 96.59%, and 97.03%, respectively. The findings reveal that ST-GGNet not only maintains strong robustness and generalization but also effectively suppresses noise and excels at distinguishing spatially similar adjacent land covers, especially in low-sample scenarios, consistently outperforming existing SOTA methods.
We have released our code and models at https://github.com/Pluviophile-sy/ST-GGNet
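As a rough illustration of the glance-and-gaze idea summarized in the abstract, the sketch below pairs a coarse global-attention branch (the "glance", attending to a strided subset of tokens) with a windowed local-attention branch (the "gaze") and averages the two. This is a minimal NumPy toy, not the authors' GGA implementation: the function names, the stride/window sizes, and the averaging fusion are all assumptions for illustration; consult the released repository for the actual mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention over 2-D (tokens, dim) arrays.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def glance_gaze(x, window=4, stride=4):
    """Hypothetical glance-and-gaze fusion (illustrative only):
    - glance: every token attends to a strided, coarse subset of all
      tokens, giving cheap long-range (global) context;
    - gaze: tokens attend only within non-overlapping local windows,
      capturing fine local detail.
    The two branches are simply averaged here; the paper's GGA fusion
    may differ."""
    n, _ = x.shape
    # Glance branch: coarse global context from every `stride`-th token.
    coarse = x[::stride]
    glance = attention(x, coarse, coarse)
    # Gaze branch: local attention inside each window.
    gaze = np.zeros_like(x)
    for s in range(0, n, window):
        w = x[s:s + window]
        gaze[s:s + window] = attention(w, w, w)
    return 0.5 * (glance + gaze)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))  # e.g., 16 spectral-spatial tokens, dim 8
out = glance_gaze(tokens)
print(out.shape)  # (16, 8)
```

The glance branch costs O(n·n/stride) instead of O(n²), which is one common way such two-scale attention schemes trade a small loss of global resolution for efficiency.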