A Joint Multiscale Graph Attention and Classify-Driven Autoencoder Framework for Hyperspectral Unmixing
Abstract: Deep learning has recently gained popularity in hyperspectral unmixing (HU), with typical approaches based on convolutional neural networks (CNNs) and autoencoders. However, most existing methods capture only local features in hyperspectral images (HSIs) and neglect long-range spatial dependencies, i.e., the correlations between distant pixels or regions. Graph neural networks (GNNs), which model complex spatial relationships and interactions in data, have recently shown great potential in various fields. This article therefore develops a joint multiscale graph attention and classify-driven autoencoder (MSGA-CD) framework for HU. Its core consists of a multiscale graph attention abundance (MSGAA) module, a local-global abundance fusion (LGAF) module, and an abundance-classify-driven endmember decoder (ACDE) module. Concretely, MSGAA incorporates a multiscale strategy into the graph attention network (GAT) to extract diverse long-range spatial dependencies in HSIs at different levels and obtain global abundances. LGAF then fuses the local abundances obtained by the CNN with the global abundances from MSGAA, achieving a more precise abundance representation. Moreover, ACDE clusters all HSI pixel features into endmember categories according to their abundance fractions and uses these clusters as priors to drive endmember learning, effectively improving the accuracy of endmember extraction. Finally, the abundance and endmember matrices are trained simultaneously through a joint loss that constrains their mutual dependence. Experiments show that MSGA-CD outperforms state-of-the-art methods, offering a promising approach for HU.
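To make the local-global abundance fusion and the shared linear decoder concrete, the following is a minimal PyTorch sketch, not the authors' implementation: a CNN branch and a single graph-attention layer (a stand-in for the multiscale MSGAA module) each predict abundances, which are fused and passed through a linear decoder whose weights play the role of the endmember matrix. All layer sizes, the adjacency construction, and fusion by summation are illustrative assumptions.

```python
# Minimal sketch of the local-global abundance idea described in the abstract.
# NOT the MSGA-CD implementation: the multiscale GAT, LGAF, and ACDE modules
# are reduced to a single graph-attention layer, additive fusion, and a plain
# reconstruction loss. All hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over a dense adjacency mask (GAT-style)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (N, in_dim) pixel features; adj: (N, N) binary adjacency
        h = self.proj(x)                                    # (N, out_dim)
        n = h.size(0)
        hi = h.unsqueeze(1).expand(n, n, -1)                # (N, N, out_dim)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))          # restrict to edges
        alpha = torch.softmax(e, dim=-1)                    # attention weights
        return alpha @ h                                    # aggregated features


class LocalGlobalUnmixer(nn.Module):
    def __init__(self, n_bands, n_endmembers, hidden=64):
        super().__init__()
        # Local branch: 1-D convolution over each pixel's spectral axis.
        self.local = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.local_head = nn.Linear(hidden, n_endmembers)
        # Global branch: graph attention over the pixel graph.
        self.gat = GraphAttentionLayer(n_bands, hidden)
        self.global_head = nn.Linear(hidden, n_endmembers)
        # Decoder weight plays the role of the endmember matrix.
        self.decoder = nn.Linear(n_endmembers, n_bands, bias=False)

    def forward(self, x, adj):
        # x: (N, n_bands) pixel spectra; adj: (N, N) spatial adjacency
        a_local = self.local_head(self.local(x.unsqueeze(1)).squeeze(-1))
        a_global = self.global_head(self.gat(x, adj))
        # Fuse local and global abundances; softmax enforces the
        # nonnegativity and sum-to-one abundance constraints.
        abundances = torch.softmax(a_local + a_global, dim=-1)  # (N, P)
        recon = self.decoder(abundances)                         # (N, n_bands)
        return abundances, recon


if __name__ == "__main__":
    N, bands, P = 32, 100, 4
    x = torch.rand(N, bands)
    adj = (torch.rand(N, N) < 0.1).float()
    adj.fill_diagonal_(1.0)  # self-loops so every node attends to itself
    model = LocalGlobalUnmixer(bands, P)
    a, recon = model(x, adj)
    loss = F.mse_loss(recon, x)  # reconstruction term of a joint loss
    print(a.shape, recon.shape, loss.item())
```

In this sketch, training the decoder weights and the abundance branches against a single reconstruction loss mirrors, in simplified form, the joint optimization of abundances and endmembers described above.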