Abstract: This paper presents a deep learning framework for imputing missing values caused by cloud cover in optical remote sensing imagery. The framework combines two deep learning models: a Denoising Autoencoder (DAE) and a Conditional Generative Adversarial Network (cGAN). First, the DAE is applied to the input images, treating cloud pixels as noise and removing them without requiring explicit cloud masks. Next, the denoised image is processed with a patch-wise blending technique to enhance spectral detail. Finally, the cGAN refines the blended images to improve visual fidelity. Experimental results on the modified Sen2 MTC Old dataset demonstrate the method's effectiveness in removing cloud cover, achieving competitive Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) scores. This work offers a valuable tool for cloud imputation in the remote sensing community, increasing usable data density for downstream machine learning applications such as land cover mapping and environmental monitoring.
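The three-stage pipeline described above (DAE denoising, patch-wise blending, cGAN refinement) can be sketched at a high level as follows. This is a minimal illustrative sketch, not the authors' implementation: the network stages are stand-in stubs, and the `alpha`-weighted blending rule and patch size are assumptions introduced here for illustration.

```python
import numpy as np

def dae_denoise(image):
    """Stub for a trained denoising autoencoder that treats cloud
    pixels as noise. A real DAE would reconstruct cloud-free pixels;
    this placeholder simply passes the image through."""
    return image

def patchwise_blend(denoised, reference, patch=8, alpha=0.5):
    """Blend the DAE output with the original image patch by patch to
    recover spectral detail. The fixed alpha weighting is an assumption,
    not the paper's specific blending rule."""
    out = denoised.copy()
    h, w, _ = out.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            out[y:y + patch, x:x + patch] = (
                alpha * denoised[y:y + patch, x:x + patch]
                + (1 - alpha) * reference[y:y + patch, x:x + patch]
            )
    return out

def cgan_refine(blended):
    """Stub for the cGAN generator that refines visual fidelity;
    here it only clamps values to a valid reflectance range."""
    return np.clip(blended, 0.0, 1.0)

# End-to-end pass on a synthetic cloudy image (H, W, bands).
cloudy = np.random.rand(64, 64, 3).astype(np.float32)
result = cgan_refine(patchwise_blend(dae_denoise(cloudy), cloudy))
```

In a real system each stub would be replaced by a trained network, and the pipeline would be evaluated against cloud-free reference imagery using SSIM and PSNR.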
External IDs: dblp:conf/icdm/YarramsettiJLLV24