Abstract: Hyperspectral (HS) pansharpening aims to fuse high-spatial-resolution panchromatic (PAN) images with low-spatial-resolution HS (LRHS) images to generate high-spatial-resolution HS (HRHS) images. Because they do not account for the modal feature differences between PAN and LRHS images, most deep learning-based methods suffer from spectral and spatial distortions in their fusion results. In addition, most methods feed upsampled LRHS images into the network, which further introduces spectral distortion. To address these issues, we propose a dual-stage feature correction fusion network (DFCFN) that achieves accurate fusion of PAN and LRHS images through two fusion subnetworks: a feature correction compensation fusion network (FCCFN) and a multiscale spectral correction fusion network (MSCFN). Built on a lattice filter structure, FCCFN obtains an initial fusion result by mutually correcting and supplementing the modal features of the PAN and LRHS images. To suppress spectral distortion and produce fine HRHS results, MSCFN, based on the 2-D discrete wavelet transform (2D-DWT), progressively corrects the spectral features of the initial fusion result using a conditional entropy transformer (CE-Transformer). Extensive experiments on three widely used simulated datasets and one real dataset demonstrate that the proposed DFCFN achieves significant improvements in both spatial and spectral quality metrics over other state-of-the-art (SOTA) methods. Specifically, it improves the spectral angle mapper (SAM) metric by 6.4%, 6.2%, and 5.3% over the second-best comparison approach on the Pavia Center, Botswana, and Chikusei datasets, respectively. The code is available at: https://github.com/EchoPhD/DFCFN.
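As context for the 2D-DWT stage underlying MSCFN, the sketch below shows a single-level Haar 2D-DWT and its inverse in plain NumPy: the transform splits an image into a low-frequency approximation subband (LL) and three high-frequency detail subbands (LH, HL, HH), each at half resolution, and is perfectly invertible. This is a minimal illustration only; the function names are hypothetical, and the paper's MSCFN applies the transform to learned feature maps inside a network, not to raw bands as done here.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt2(x: np.ndarray):
    """Single-level Haar 2D-DWT: returns (LL, LH, HL, HH) subbands,
    each half the resolution of the input (assumes even dimensions)."""
    # Transform along columns: pairwise averages (low) and differences (high).
    lo = (x[:, 0::2] + x[:, 1::2]) / SQRT2
    hi = (x[:, 0::2] - x[:, 1::2]) / SQRT2
    # Transform along rows of each half-band.
    ll = (lo[0::2] + lo[1::2]) / SQRT2
    lh = (lo[0::2] - lo[1::2]) / SQRT2
    hl = (hi[0::2] + hi[1::2]) / SQRT2
    hh = (hi[0::2] - hi[1::2]) / SQRT2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse single-level Haar 2D-DWT; e.g. after (hypothetical)
    per-subband correction, reassemble the full-resolution image."""
    h, w = ll.shape
    lo = np.empty((2 * h, w))
    hi = np.empty((2 * h, w))
    lo[0::2] = (ll + lh) / SQRT2
    lo[1::2] = (ll - lh) / SQRT2
    hi[0::2] = (hl + hh) / SQRT2
    hi[1::2] = (hl - hh) / SQRT2
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2] = (lo + hi) / SQRT2
    x[:, 1::2] = (lo - hi) / SQRT2
    return x

# Toy example on a single 64x64 band.
band = np.random.rand(64, 64)
ll, lh, hl, hh = haar_dwt2(band)
recon = haar_idwt2(ll, lh, hl, hh)
print(ll.shape)                  # (32, 32) — each subband is half resolution
print(np.allclose(band, recon))  # True — Haar DWT is perfectly invertible
```

Correcting spectra in this multiscale subband domain lets a network treat coarse spectral content (LL) separately from spatial detail (LH/HL/HH), which is the intuition behind the wavelet-based design described in the abstract.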