Dual-Semantic Graph Convolution Network for Hyperspectral Image Classification With Few Labeled Samples
Abstract: In recent years, superpixel-based graph convolutional networks (GCNs) have drawn increasing attention in the hyperspectral image (HSI) classification community. Owing to the high dimensionality of HSI, constructing a high-quality, accurate initial graph remains a great challenge for superpixel-based GCN methods. In addition, the lack of high-level semantics in superpixel-based node features leads to poor classification performance, especially when labeled samples are limited. To tackle these problems, we propose a novel approach, the dual-semantic graph convolution network (DSGCN), for HSI classification. Specifically, our method employs superpixel segmentation to construct graph nodes with semantic structure information, treating each superpixel in the HSI as a node of the graph. We design a superpixel-level autoencoder that integrates with the initial graph to update the edge weights; with learnable edge weights, the model can adaptively learn robust spatial semantic (SS) information from the HSI. We further introduce a spectrum-flow (SF) module to extract global spectral semantic variation information. To enhance the nonlinear capability of the GCN, we replace the traditional linear layer with a novel network layer, the Kolmogorov-Arnold network (KAN), during the node representation phase. We also develop a memory-efficient residual spectral attention (MERSA) module, compatible with full-batch training, in the convolutional neural network (CNN) branch to supplement fine-grained pixel-level features. Extensive experiments on four benchmark datasets demonstrate that the proposed DSGCN significantly outperforms several state-of-the-art methods, particularly when only a small amount of labeled data is available.
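The superpixel-as-node idea underlying DSGCN can be illustrated with a minimal, self-contained sketch (not the authors' code): each superpixel's feature is the mean spectrum of its pixels, adjacent superpixels are linked by an edge, and one symmetrically normalized GCN propagation step is applied. All function names and the toy data below are illustrative assumptions.

```python
import numpy as np

def superpixel_nodes(hsi, segments):
    """Mean-pool pixel spectra into one feature vector per superpixel (node)."""
    n_nodes = segments.max() + 1
    feats = np.zeros((n_nodes, hsi.shape[-1]))
    for s in range(n_nodes):
        feats[s] = hsi[segments == s].mean(axis=0)
    return feats

def adjacency(segments):
    """Connect superpixels that share a boundary (4-neighborhood), plus self-loops."""
    n = segments.max() + 1
    A = np.eye(n)
    h, w = segments.shape
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w and segments[ni, nj] != segments[i, j]:
                    a, b = segments[i, j], segments[ni, nj]
                    A[a, b] = A[b, a] = 1.0
    return A

def gcn_step(A, X, W):
    """One propagation step: ReLU(D^{-1/2} A D^{-1/2} X W)."""
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy example: a 4x4 "image" with 10 spectral bands and 4 superpixels in a 2x2 layout.
rng = np.random.default_rng(0)
hsi = rng.random((4, 4, 10))
segments = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 2, axis=0), 2, axis=1)
X = superpixel_nodes(hsi, segments)          # (4 nodes, 10 bands)
H = gcn_step(adjacency(segments), X, rng.random((10, 8)))
print(H.shape)  # (4, 8)
```

In DSGCN, the fixed edge weights of this sketch would instead be updated by the superpixel-level autoencoder, and the linear map `W` would be replaced by a KAN layer.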