An interpretable bilateral detail optimization deep unfolding network for pansharpening

Published: 01 Jan 2025, Last Modified: 11 Apr 2025 · Neurocomputing 2025 · CC BY-SA 4.0
Abstract: Pansharpening aims to produce high-resolution multispectral (HRMS) images for many remote sensing applications. Deep unfolding networks (DUNs) have achieved great success in recent years, combining the interpretability of model-based methods with the feature representation ability of deep learning-based methods. However, existing DUNs suffer from spectral and spatial distortions because they underutilize the intrinsic properties of the source images and lose substantial information in the signal flow. To address these issues, we first propose two source image-guided detail priors to formulate a novel variational optimization model for multispectral pansharpening, which alleviates information loss across stages and enhances the spectral and spatial fidelity of the fused images. Unlike existing DUNs for multispectral pansharpening, our optimization objective is the spectral and spatial detail information, which is easier to optimize and improves accuracy as network depth increases. The model is then solved with the half-quadratic splitting method and unfolded into an interpretable bilateral detail optimization deep unfolding network (BDODUN), consisting of reconstruction modules and prior modules that correspond to the iterative steps. In addition, a novel loss function is proposed to enhance performance by adaptively weighting the results of the two branches, effectively mitigating both spectral and spatial distortions. Extensive experiments validate that our method outperforms state-of-the-art methods.
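The abstract does not give implementation details, but the general structure it describes (half-quadratic splitting unfolded into alternating reconstruction and prior modules) follows a common pattern. The sketch below is a minimal, illustrative PyTorch-style example of one such unfolding stage, not the authors' BDODUN: the module names, channel sizes, learnable step/penalty parameters, and the simple residual CNN used as the learned prior are all assumptions for illustration.

```python
# Minimal sketch of one HQS-style deep-unfolding stage: a gradient-style
# reconstruction (data-fidelity) step followed by a learned prior (proximal)
# step. All names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class PriorModule(nn.Module):
    """Learned proximal operator standing in for a detail prior."""
    def __init__(self, channels=8, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, x):
        # Residual refinement of the current detail estimate.
        return x + self.net(x)

class UnfoldingStage(nn.Module):
    """One unfolded HQS iteration: reconstruction module + prior module."""
    def __init__(self, channels=8):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.1))  # learnable gradient step size
        self.mu = nn.Parameter(torch.tensor(0.5))    # learnable HQS penalty weight
        self.prior = PriorModule(channels)

    def forward(self, d, z, residual):
        # Reconstruction module: update the detail estimate d using the
        # data-fidelity residual and the auxiliary variable z.
        d = d - self.step * (residual + self.mu * (d - z))
        # Prior module: learned proximal update of the auxiliary variable.
        z = self.prior(d)
        return d, z

if __name__ == "__main__":
    stage = UnfoldingStage(channels=8)
    d = torch.zeros(1, 8, 64, 64)        # detail estimate
    z = torch.zeros_like(d)              # HQS auxiliary variable
    residual = torch.randn_like(d)       # placeholder data-fidelity residual
    d, z = stage(d, z, residual)
    print(d.shape, z.shape)
```

In a full network of this kind, several such stages would be stacked, with the data-fidelity residual recomputed at each stage from the PAN and LRMS inputs; the paper's bilateral design additionally maintains separate spectral- and spatial-detail branches whose outputs are adaptively weighted by the proposed loss.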