CFCA-SET: Coarse-to-Fine Context-Aware SAR-to-EO Translation With Auxiliary Learning of SAR-to-NIR Translation

Published: 01 Jan 2023, Last Modified: 05 Nov 2024 · IEEE Trans. Geosci. Remote Sens., 2023 · CC BY-SA 4.0
Abstract: Satellite synthetic aperture radar (SAR) images are immensely valuable because they can be acquired regardless of weather and lighting conditions. However, SAR images suffer from severe noise and carry less contextual information, which makes them harder to interpret. Therefore, translating SAR images into electro-optical (EO) images is highly desirable for easier interpretation. In this article, we propose a novel coarse-to-fine context-aware SAR-to-EO translation (CFCA-SET) framework and a misalignment-resistant (MR) loss for misaligned SAR-EO image pairs. With our auxiliary learning of SAR-to-near-infrared (NIR) translation, CFCA-SET consists of a two-stage training: 1) in the coarse stage, low-resolution SAR-to-EO translation is learned via a local self-attention module that helps suppress SAR noise, and 2) in the fine stage, the resulting output is used as guidance to generate a high-resolution SAR colorization. Our proposed auxiliary learning of SAR-to-NIR translation successfully leads CFCA-SET to learn the distinguishable characteristics of various SAR objects with less confusion in a context-aware manner. To handle the inevitable misalignment between SAR and EO images, we design a new MR loss function. Extensive experimental results show that our CFCA-SET generates more recognizable and understandable EO-like images than other methods in terms of nine image quality metrics. Our CFCA-SET surpasses the state-of-the-art methods on two datasets (QXS and CASET) with the following improvements: PSNR (3.6%, 29%), ERGAS (7.4%, 30%), SSIM (15%, 15%), SAM (21%, 38%), $D_{S}$ (16%, 13%), QNR (1.5%, 3.1%), CHD (18%, 12%), LPIPS (4.2%, 8%), and FID (9.0%, 33%).
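To make the two-stage coarse-to-fine idea concrete, below is a minimal, hedged sketch of how such a pipeline could be wired up in PyTorch. It is not the authors' implementation: the module names (CoarseNet, FineNet), layer choices, and the plain convolutional stack standing in for the paper's local self-attention module and MR loss are all illustrative assumptions. It only shows the data flow of the abstract: a coarse stage that maps downsampled SAR to a low-resolution EO image plus an auxiliary NIR output, and a fine stage that colorizes the full-resolution SAR guided by the upsampled coarse prediction.

```python
# Illustrative sketch only (not the authors' code): two-stage coarse-to-fine
# SAR-to-EO translation with an auxiliary NIR head. All names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseNet(nn.Module):
    """Coarse stage: translate downsampled SAR into a low-resolution EO image.
    A plain conv stack stands in for the paper's local self-attention module."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.to_eo = nn.Conv2d(32, 3, 3, padding=1)   # low-res EO (RGB) head
        self.to_nir = nn.Conv2d(32, 1, 3, padding=1)  # auxiliary NIR head

    def forward(self, sar_lr):
        feat = self.body(sar_lr)
        return self.to_eo(feat), self.to_nir(feat)


class FineNet(nn.Module):
    """Fine stage: colorize full-resolution SAR guided by the upsampled
    coarse EO prediction (concatenated along the channel axis)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, sar_hr, coarse_eo):
        guide = F.interpolate(coarse_eo, size=sar_hr.shape[-2:],
                              mode="bilinear", align_corners=False)
        return self.body(torch.cat([sar_hr, guide], dim=1))


if __name__ == "__main__":
    sar_hr = torch.randn(1, 1, 256, 256)             # full-resolution SAR patch
    sar_lr = F.interpolate(sar_hr, scale_factor=0.25,
                           mode="bilinear", align_corners=False)
    coarse_eo, aux_nir = CoarseNet()(sar_lr)          # stage 1: low-res EO + NIR
    fine_eo = FineNet()(sar_hr, coarse_eo)            # stage 2: guided colorization
    print(coarse_eo.shape, aux_nir.shape, fine_eo.shape)
```

In the paper's framework, the coarse and fine stages are trained in sequence and the losses (including the MR loss for misaligned SAR-EO pairs) attach to the EO and NIR outputs shown here; those loss definitions are omitted from this sketch.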
