Abstract: With the rapid advancement of deep generative technologies, remote sensing imagery is increasingly susceptible to forgery and manipulation. However, forged remote sensing images often exhibit high visual consistency between the foreground and background, along with blurred edges, which substantially complicates the localization of forged regions. To address this challenge, we propose a foreground–edge collaboration-driven network (FECDNet) for the precise localization of forged regions. FECDNet employs a dual-stream feature modeling architecture, with a backbone built from our designed wavelet pyramid convolution (WPC) block. This block performs two successive wavelet transforms (WTs), effectively expanding the receptive field and modeling forgery traces across different frequencies, thereby significantly enhancing the model’s capability to identify and represent forged regions. Subsequently, we propose a multirange group fusion (MRGF) module to integrate fine-grained textures, local structures, and broad semantic information from both streams, enabling the full mining and fusion of dual-stream forgery features. Finally, the designed multilevel feature decoupling (MLFD) module and foreground–edge attention guidance (FEAG) module explicitly decouple and collaboratively guide foreground and edge features, thereby improving the discriminative representation and localization accuracy of forged regions. Additionally, building on the HRCUS satellite dataset, we construct a new forged remote sensing dataset, Fake-HRCUS, using generative models, providing a valuable benchmark for future research. Experimental results demonstrate that FECDNet outperforms existing state-of-the-art methods across three benchmark datasets, verifying its effectiveness and generalization capability. The code and dataset are available at https://github.com/NjustHGWei/FECDNet
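The abstract does not specify the WPC block's internals, but the stated mechanism (two cascaded WTs enlarging the receptive field while separating frequency bands) can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification assuming Haar wavelets and a plain 3×3 convolution; the function names and kernel are illustrative, not the authors' implementation.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar wavelet transform (illustrative, not FECDNet's exact WT).
    Splits x (H, W) into four half-resolution subbands: LL (low-frequency
    approximation) and LH/HL/HH (high-frequency detail bands, where
    forgery traces such as blurred edges tend to concentrate)."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0
    lh = (a + b - c - d) / 2.0
    hl = (a - b + c - d) / 2.0
    hh = (a - b - c + d) / 2.0
    return ll, lh, hl, hh

def conv3x3_valid(x, k):
    """Plain 3x3 'valid' cross-correlation applied to a subband."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

# Each WT halves resolution, so a 3x3 kernel on the once-transformed LL
# band covers a 6x6 input patch; after a second WT it covers 12x12 --
# the receptive field grows without enlarging the kernel, while the
# LH/HL/HH bands retain high-frequency cues at each level.
x = np.random.rand(16, 16)
ll1, lh1, hl1, hh1 = haar_dwt2(x)   # four 8x8 subbands
ll2, _, _, _ = haar_dwt2(ll1)       # 4x4 low-frequency band after second WT
k = np.ones((3, 3)) / 9.0           # toy averaging kernel
feat = conv3x3_valid(ll2, k)        # 2x2 map; each unit spans a 12x12 region
```

The sketch only demonstrates the receptive-field and frequency-separation argument; the actual WPC block additionally learns convolutional filters per frequency band within the dual-stream backbone.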
DOI: 10.1109/TGRS.2026.3659253