Mitigating Texture Bias: A Remote Sensing Super-Resolution Method Focusing on High-Frequency Texture Reconstruction
Abstract: Super-resolution (SR) is an ill-posed problem because a single low-resolution image can correspond to multiple high-resolution images, and high-frequency details are largely lost in low-resolution inputs. Existing deep learning-based SR models excel at reconstructing low-frequency and regular textures but often fail to reconstruct high-frequency textures with high quality. These models exhibit bias toward different texture regions, leading to imbalanced reconstruction across areas. To address this issue and reduce model bias toward diverse texture patterns, we propose a frequency-aware SR method that improves the reconstruction of high-frequency textures by incorporating local data distributions. First, we introduce the frequency-aware transformer (FAT), which enhances the ability of transformer-based models to extract frequency-domain and global features from remote sensing images. Second, we design a loss function based on local extrema and variance, which guides the model to reconstruct more realistic texture details by focusing on the local data distribution. Finally, we construct a high-quality remote sensing SR dataset named RSSR25. We also find that denoising algorithms can serve as an effective enhancement for existing public datasets, improving model performance. Extensive experiments on multiple datasets demonstrate that the proposed FAT achieves superior perceptual quality while maintaining high distortion-metric scores compared with state-of-the-art algorithms. The source code and dataset will be publicly available at: https://github.com/fengyanzi/FAT.
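As a rough illustration of the local extremum and variance-based loss described in the abstract, the following PyTorch-style sketch compares local maxima, minima, and variances between the super-resolved output and the high-resolution target. This is an assumed minimal sketch, not the authors' released implementation; the window size and loss weights are illustrative choices.

```python
# Hedged sketch (assumed, not the authors' code): a loss that matches local
# extrema and local variance between SR output and HR target, encouraging
# the network to follow the local data distribution of high-frequency textures.
import torch
import torch.nn.functional as F


def local_stats(x: torch.Tensor, window: int = 5):
    """Return per-pixel local max, local min, and local variance of x (N, C, H, W)."""
    pad = window // 2
    local_max = F.max_pool2d(x, window, stride=1, padding=pad)
    local_min = -F.max_pool2d(-x, window, stride=1, padding=pad)
    local_mean = F.avg_pool2d(x, window, stride=1, padding=pad)
    local_var = F.avg_pool2d(x * x, window, stride=1, padding=pad) - local_mean ** 2
    return local_max, local_min, local_var


def extremum_variance_loss(sr: torch.Tensor, hr: torch.Tensor,
                           w_ext: float = 1.0, w_var: float = 1.0) -> torch.Tensor:
    """L1 distance between local extrema and local variances of SR and HR patches."""
    sr_max, sr_min, sr_var = local_stats(sr)
    hr_max, hr_min, hr_var = local_stats(hr)
    ext_term = F.l1_loss(sr_max, hr_max) + F.l1_loss(sr_min, hr_min)
    var_term = F.l1_loss(sr_var, hr_var)
    return w_ext * ext_term + w_var * var_term


# Example usage during training, combined with a standard pixel loss
# (the 0.1 weighting is an illustrative assumption):
# total_loss = F.l1_loss(sr, hr) + 0.1 * extremum_variance_loss(sr, hr)
```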