A Deep Network Based on Multiscale Spectral-Spatial Fusion for Hyperspectral Classification

Published: 01 Jan 2018, Last Modified: 13 Nov 2024. KSEM (2) 2018. License: CC BY-SA 4.0
Abstract: In this paper, we propose a deep network based on multiscale spectral-spatial fusion (MSS-Net) for Hyperspectral Image (HSI) classification. To extract better joint spectral-spatial features, the proposed network adopts a multiscale spectral-spatial fusion method, because regions of different scales contain different spatial structures, texture features, and richer neighborhood correlations that are helpful for classification. For each input scale, 3-D cubes taken from the raw data are fed to a spatial learning module and a spectral learning module, respectively. These two modules extract features that preserve rich, original spectral-spatial correlations from the 3-D raw input, and their outputs are combined into fused spectral-spatial features. The resulting multiscale fused spectral-spatial features are fed to two consecutive residual learning blocks. Each residual block contains two 3-D convolutional layers; it makes full use of the fused features to learn more discriminative, high-level features and helps the network maintain high accuracy as the network grows deeper. After residual learning, the multiscale fused spectral-spatial features are concatenated and sent to a fully connected layer for classification. We validate our method on three HSI data sets, and the experimental results show that it outperforms other state-of-the-art methods.
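To make the described architecture concrete, below is a minimal PyTorch sketch of the structure outlined in the abstract (not the authors' code): the kernel shapes, channel width, number of scales, neighborhood sizes, and the pooling before the classifier are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ResidualBlock3D(nn.Module):
    """Residual learning block with two 3-D convolutional layers and an identity shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))


class ScaleBranch(nn.Module):
    """One input scale: separate spectral and spatial learning modules,
    fusion of their outputs, then two consecutive residual blocks."""

    def __init__(self, channels=16):
        super().__init__()
        # Spectral learning module: convolve along the band axis only (assumed kernel shape).
        self.spectral = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=(7, 1, 1), padding=(3, 0, 0)),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # Spatial learning module: convolve over the spatial neighborhood only (assumed kernel shape).
        self.spatial = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
            nn.BatchNorm3d(channels),
            nn.ReLU(inplace=True),
        )
        # Combine spectral and spatial features into fused spectral-spatial features.
        self.fuse = nn.Conv3d(2 * channels, channels, kernel_size=1)
        self.res = nn.Sequential(ResidualBlock3D(channels), ResidualBlock3D(channels))
        self.pool = nn.AdaptiveAvgPool3d(1)

    def forward(self, cube):  # cube: (N, 1, bands, H, W)
        fused = self.fuse(torch.cat([self.spectral(cube), self.spatial(cube)], dim=1))
        return self.pool(self.res(fused)).flatten(1)  # (N, channels)


class MSSNet(nn.Module):
    """Multiscale fusion: one branch per spatial scale; branch features are
    concatenated and classified by a fully connected layer."""

    def __init__(self, num_classes, num_scales=3, channels=16):
        super().__init__()
        self.branches = nn.ModuleList([ScaleBranch(channels) for _ in range(num_scales)])
        self.classifier = nn.Linear(num_scales * channels, num_classes)

    def forward(self, cubes):  # list of (N, 1, bands, H_s, W_s) cubes, one per scale
        feats = [branch(cube) for branch, cube in zip(self.branches, cubes)]
        return self.classifier(torch.cat(feats, dim=1))


# Example: three assumed neighborhood sizes (5x5, 7x7, 9x9) around each pixel of a 103-band image.
net = MSSNet(num_classes=9)
cubes = [torch.randn(2, 1, 103, s, s) for s in (5, 7, 9)]
print(net(cubes).shape)  # torch.Size([2, 9])
```

In this sketch, every branch sees the same pixel at a different neighborhood size; only the spatial extent of the input cube changes across branches, which is how the multiscale fusion idea is realized here.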