Downstream Task-Aware Cloud Removal for Very-High-Resolution Remote Sensing Images: An Information Loss Perspective
Abstract: Cloud removal (CR) methods have been widely studied to address the issue of cloud occlusion in Earth observation tasks. Existing CR methods rely heavily on image similarity metrics, such as peak signal-to-noise ratio and structural similarity index measure, to evaluate the quality of CR results. However, due to factors including rapid changes in landforms and viewpoint differences between cloudy and reference images, image similarity metrics can be ineffective or even misleading. To address these challenges, this study investigates CR by evaluating whether CR algorithms effectively produce information beneficial for downstream tasks. We introduce CUHKCR-EXT, the first very-high-resolution CR dataset explicitly designed for assessing post-CR downstream task performance. Furthermore, we propose DFCFormer, a dynamic filter-based transformer that generates adaptive kernels conditioned on cloud characteristics, enabling more precise recovery across diverse cloud types within a unified framework. In addition, we design a feature alignment loss that enforces consistency between cloud-removed and reference features at the semantic level, guiding the model to retain landform-relevant information crucial for downstream analysis. Using scene classification as a representative downstream task, we conduct extensive experiments and evaluate performance with both image similarity and information loss metrics. The results demonstrate that the proposed method achieves strong performance across all evaluated metrics. More importantly, the improvements lie not only in image similarity but also in the preservation of task-relevant semantics, which enhances the effective quality of output images for downstream applications rather than merely their visual fidelity.
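The feature alignment loss described above can be sketched minimally. This is a hypothetical illustration, not the paper's implementation: it assumes semantic features for the cloud-removed and reference images have already been extracted (e.g., by a frozen encoder) and penalizes one minus their mean cosine similarity.

```python
import numpy as np

def feature_alignment_loss(f_pred, f_ref, eps=1e-8):
    """Hypothetical semantic-level alignment loss: 1 minus the mean
    cosine similarity between cloud-removed features (f_pred) and
    reference features (f_ref), each of shape (N, D)."""
    f_pred = np.asarray(f_pred, dtype=np.float64)
    f_ref = np.asarray(f_ref, dtype=np.float64)
    num = np.sum(f_pred * f_ref, axis=-1)
    den = np.linalg.norm(f_pred, axis=-1) * np.linalg.norm(f_ref, axis=-1) + eps
    return float(np.mean(1.0 - num / den))
```

Identical feature vectors give a loss near zero, while orthogonal (semantically unrelated) features give a loss near one, so minimizing this term pushes the restored image toward the reference in feature space rather than pixel space.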
DOI: 10.1109/JSTARS.2025.3610641