Highlights

• A gradient-guided pansharpening framework, GGPNet, is proposed that leverages gradient features as progressive guidance during the fusion of PAN and MS images, providing rich spatial texture information.

• A novel multi-image cross-attention block enhances fusion performance by effectively combining spatial and spectral information. This block facilitates cross-modal alignment and adaptively highlights critical regions, such as edges and textures, ensuring both spatial fidelity and spectral consistency.

• Extensive experiments on diverse datasets, including GaoFen-2 (GF2), QuickBird (QB), and WorldView-3 (WV3), together with comparative analysis against state-of-the-art methods, demonstrate the effectiveness of GGPNet both qualitatively and quantitatively.
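The highlights describe a cross-attention block in which one modality attends to the other to align spatial and spectral features. The paper's exact block is not given here, so the following is only a minimal NumPy sketch of the general cross-attention pattern it names: queries are drawn from PAN-side (spatial/gradient) features, while keys and values come from MS-side (spectral) features; the function name, projection sizes, and random weights are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pan_feat, ms_feat, d=16, seed=0):
    """Illustrative cross-modal attention (not the paper's exact block).

    pan_feat: (N_pan, C_pan) token features from the PAN branch (queries).
    ms_feat:  (N_ms,  C_ms)  token features from the MS branch (keys/values).
    Returns (fused, attn): PAN tokens enriched with MS content, and the
    (N_pan, N_ms) cross-modal alignment map.
    """
    rng = np.random.default_rng(seed)
    # Random projection weights stand in for learned linear layers.
    Wq = rng.standard_normal((pan_feat.shape[-1], d)) / np.sqrt(pan_feat.shape[-1])
    Wk = rng.standard_normal((ms_feat.shape[-1], d)) / np.sqrt(ms_feat.shape[-1])
    Wv = rng.standard_normal((ms_feat.shape[-1], d)) / np.sqrt(ms_feat.shape[-1])
    Q, K, V = pan_feat @ Wq, ms_feat @ Wk, ms_feat @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))  # each PAN token attends over MS tokens
    return attn @ V, attn
```

In a trained network the projections would be learned and the attention map would concentrate on regions such as edges and textures; here it simply demonstrates the query/key/value flow between the two modalities.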
DOI: 10.1016/j.neucom.2025.131607