DSSFT: Dual-branch spectral-spatial feature fusion transformer network for hyperspectral image unmixing
Abstract: Hyperspectral unmixing (HU) plays a crucial role in advancing hyperspectral image (HSI) analysis. Its goal is to decompose mixed pixels into distinct spectral signatures, called endmembers, and to estimate the fractional abundance of each endmember across the image. In recent years, deep learning (DL) techniques, especially convolutional neural network (CNN)-based autoencoders (AEs), have attracted significant attention in the HU field. CNNs perform remarkably well in HU, primarily because of their ability to capture local contextual features. However, they struggle to model long-range dependencies and the inherently sequential character of HSI data. Consequently, further improving CNN-based models for HSI unmixing is difficult, as they cannot fully exploit the intricate and continuous spectral information of HSI. This paper introduces a Dual-Branch Spectral-Spatial Feature Fusion Transformer network for hyperspectral unmixing (DSSFT). The proposed model integrates the transformer architecture into the HSI processing pipeline to better accommodate the sequential nature of HSI data. Two parallel transformer branches are dedicated to learning comprehensive spectral and spatial characteristics, respectively, and the features learned by these branches are then fused. The fusion explicitly weighs the importance of the combined spectral and spatial features, substantially improving HU performance. The proposed DSSFT method is extensively evaluated on one synthetic and three real hyperspectral datasets, and the results confirm its superior performance compared to state-of-the-art methods.
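To make the dual-branch idea concrete, the following is a minimal PyTorch sketch of a dual-branch spectral-spatial transformer autoencoder for unmixing. It is not the authors' implementation: the layer sizes, the gated fusion scheme, the token definitions (bands as tokens in the spectral branch, patch pixels as tokens in the spatial branch), and the softmax abundance head with a linear decoder are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: a dual-branch spectral-spatial transformer
# autoencoder for hyperspectral unmixing. All hyperparameters, the gated
# fusion, and the abundance/decoder heads are assumptions, not the paper's
# exact DSSFT architecture.

class DualBranchUnmixer(nn.Module):
    def __init__(self, num_bands=156, num_endmembers=5, patch_size=5,
                 dim=64, num_heads=4, num_layers=2):
        super().__init__()
        # Spectral branch: each band of the patch is one token.
        self.spec_embed = nn.Linear(patch_size * patch_size, dim)
        spec_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                                batch_first=True)
        self.spec_encoder = nn.TransformerEncoder(spec_layer, num_layers)
        # Spatial branch: each pixel of the patch is one token.
        self.spat_embed = nn.Linear(num_bands, dim)
        spat_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                                batch_first=True)
        self.spat_encoder = nn.TransformerEncoder(spat_layer, num_layers)
        # Gated fusion: learn the relative importance of the two feature sets.
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())
        # Abundance head (softmax enforces non-negativity and sum-to-one)
        # and a linear decoder whose weights play the role of endmembers.
        self.abundance = nn.Sequential(nn.Linear(dim, num_endmembers),
                                       nn.Softmax(dim=-1))
        self.decoder = nn.Linear(num_endmembers, num_bands, bias=False)

    def forward(self, patch):
        # patch: (B, num_bands, patch_size, patch_size)
        b, c, h, w = patch.shape
        spec_tokens = patch.reshape(b, c, h * w)        # (B, bands, pixels)
        spat_tokens = spec_tokens.permute(0, 2, 1)      # (B, pixels, bands)
        f_spec = self.spec_encoder(self.spec_embed(spec_tokens)).mean(dim=1)
        f_spat = self.spat_encoder(self.spat_embed(spat_tokens)).mean(dim=1)
        g = self.gate(torch.cat([f_spec, f_spat], dim=-1))
        fused = g * f_spec + (1 - g) * f_spat           # weighted feature fusion
        a = self.abundance(fused)                       # abundance vector
        recon = self.decoder(a)                         # reconstructed spectrum
        return recon, a

if __name__ == "__main__":
    # Random data standing in for HSI patches (batch of 8, 156 bands, 5x5 window).
    model = DualBranchUnmixer(num_bands=156, num_endmembers=5, patch_size=5)
    x = torch.rand(8, 156, 5, 5)
    recon, abundances = model(x)
    print(recon.shape, abundances.shape)  # torch.Size([8, 156]) torch.Size([8, 5])
```

In this sketch the sigmoid gate plays the role of the fusion step described in the abstract, weighting how much the spectral versus spatial features contribute before abundance estimation; the paper's actual fusion mechanism may differ.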