Dual Graph Convolutional Network for Hyperspectral Images With Spatial Graph and Spectral Multigraph

Published: 01 Jan 2024, Last Modified: 08 Apr 2025. IEEE Geoscience and Remote Sensing Letters, 2024. License: CC BY-SA 4.0.
Abstract: To accurately represent the graph structure of pixel nodes in hyperspectral remote sensing image classification based on graph convolutional networks (GCNs), two graph constructions are presented. First, a spectral multigraph adjacency matrix is formed by the weighted fusion of four single spectral adjacency matrices, each built from a different similarity measure. Second, pixel neighborhood spatial features are extracted as pixel nodes to construct spatial adjacency matrices: a Gabor wavelet transform extracts shallow pixel spatial texture features to build the shallow spatial feature adjacency matrix, and a 2-D convolutional neural network (2D-CNN) extracts deep pixel spatial features to build the deep spatial feature adjacency matrix. By combining a shallow texture or deep spatial graph branch with the spectral multigraph branch, a spatial texture feature spectral multigraph dual interactive GCN (STSM-DGCN) and a spatial deep feature spectral multigraph dual interactive GCN (SDSM-DGCN) are designed. Experimental results on three real datasets show that the presented methods improve classification accuracy compared with support vector machine (SVM) and k-nearest neighbor classifiers and with the 3D-CNN, SSRN, HybridSN, GCN, FuNet-C, CEGCN, and WFCG models, especially under small-size training samples.
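The abstract does not name the four similarity measures or the fusion weights, so the following is only a minimal sketch of the multigraph idea: four candidate spectral adjacency matrices (here assumed to be RBF, cosine, spectral-angle, and correlation similarities, with uniform weights) are fused into one adjacency matrix over pixel nodes.

```python
import numpy as np

def rbf_adj(X, gamma=1.0):
    # RBF kernel on pairwise squared Euclidean distances between spectra.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def cosine_adj(X):
    # Cosine similarity between spectral vectors (rows of X).
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return Xn @ Xn.T

def sam_adj(X):
    # Spectral angle mapper turned into a similarity in [0, 1].
    c = np.clip(cosine_adj(X), -1.0, 1.0)
    return 1.0 - np.arccos(c) / np.pi

def corr_adj(X):
    # Pearson correlation = cosine similarity of mean-centered spectra.
    Xc = X - X.mean(axis=1, keepdims=True)
    return cosine_adj(Xc)

def multigraph_adjacency(X, weights=(0.25, 0.25, 0.25, 0.25)):
    # Weighted fusion of the four single spectral adjacency matrices.
    # The measures and uniform weights are illustrative assumptions,
    # not the paper's exact choices.
    mats = [rbf_adj(X), cosine_adj(X), sam_adj(X), corr_adj(X)]
    A = sum(w * M for w, M in zip(weights, mats))
    np.fill_diagonal(A, 0.0)  # drop self-loops; GCN normalization re-adds them
    return A
```

In a GCN pipeline, the fused matrix would then be symmetrically normalized (e.g., \(\tilde{D}^{-1/2}(\tilde{A}+I)\tilde{D}^{-1/2}\)) before graph convolution; the spatial branches would build their adjacency matrices the same way, but from Gabor or 2D-CNN features instead of raw spectra.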