Abstract: The majority of optical images acquired via spaceborne remote sensing are affected by clouds. Recent advances in cloud removal combine multimodal data with deep neural networks to recover the affected areas. To relax the requirements on the data the network is trained on, previous approaches utilized generative models that no longer necessitate strict pixel-wise correspondences between cloudy input and cloud-free target images. However, such models are oftentimes prone to fiction, i.e. the generation of content that systematically differs from the structure of the target images. In this work, we combine the fusion of optical and radar imagery with the advantages of generative models trainable on unpaired optical data, while reducing fiction by reconstructing optical information only where needed: over cloud-covered areas. We evaluate our approach qualitatively and quantitatively and demonstrate its effectiveness.
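The core idea of restricting generation to cloud-covered pixels can be illustrated with a minimal masked-compositing sketch. The snippet below is a hedged illustration in PyTorch, not the paper's actual model: the `FusionGenerator` architecture, the band counts (13 multispectral optical bands, 2 SAR bands, in the style of Sentinel-2/Sentinel-1), and the function names are assumptions introduced purely for exposition.

```python
import torch
import torch.nn as nn


# Hypothetical generator fusing optical and radar inputs; the architecture
# and band counts are placeholders, not the paper's actual network.
class FusionGenerator(nn.Module):
    def __init__(self, opt_bands: int = 13, sar_bands: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(opt_bands + sar_bands, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, opt_bands, kernel_size=3, padding=1),
        )

    def forward(self, optical: torch.Tensor, sar: torch.Tensor) -> torch.Tensor:
        # Fuse the two modalities by channel-wise concatenation.
        return self.net(torch.cat([optical, sar], dim=1))


def composite(optical, sar, cloud_mask, generator):
    """Reconstruct optical information only over cloud-covered areas.

    cloud_mask: (B, 1, H, W) tensor with 1 = cloud, 0 = clear.
    Clear pixels pass through unchanged, so generation (and hence
    potential fiction) is confined to occluded regions.
    """
    generated = generator(optical, sar)
    return cloud_mask * generated + (1.0 - cloud_mask) * optical
```

Because cloud-free pixels are copied verbatim from the input, the generative model can only alter the image where observations are actually missing, which is the mechanism the abstract credits for reducing fiction.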