Luminance-Preserving Visible and Near-Infrared Image Fusion Network with Edge Guidance

Published: 01 Jan 2023, Last Modified: 15 May 2025 · ICIP 2023 · CC BY-SA 4.0
Abstract: Near-infrared (NIR) and visible (VIS) images provide complementary information, so fusing the two modalities can produce high-quality images even under adverse conditions. However, the luminance of NIR and VIS images may be inconsistent in some regions, causing color distortion and an unrealistic appearance in the fused images; existing methods retain luminance poorly in such cases. To address this problem, we propose an edge-guided method, built on a deep learning framework, that can be applied to image fusion networks: edge maps serve as prior knowledge to boost the performance of the neural network. In addition, we propose a luminance-preserving loss combined with a max-edge loss to further improve image quality. Experimental results demonstrate the superiority of our method.
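
The abstract does not give the exact formulation of the proposed losses. As a purely illustrative sketch, the snippet below shows one way a luminance-preserving term and a max-edge term could be combined, assuming PyTorch, single-channel luminance inputs, and Sobel-based edge maps; the function names, weighting, and choice of the VIS image as the luminance reference are assumptions, not the authors' definition.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch only: the paper's actual loss definitions are not given in the abstract.
# Assumed inputs: fused, vis, nir are luminance tensors of shape (B, 1, H, W) in [0, 1].

def sobel_edges(x):
    """Approximate edge magnitude with Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def luminance_preserving_loss(fused, vis):
    """Keep fused luminance close to the visible image's luminance (assumed reference)."""
    return F.l1_loss(fused, vis)

def max_edge_loss(fused, vis, nir):
    """Push fused edges toward the stronger edge response of the two source images (assumption)."""
    target_edges = torch.maximum(sobel_edges(vis), sobel_edges(nir))
    return F.l1_loss(sobel_edges(fused), target_edges)

def total_loss(fused, vis, nir, lambda_edge=1.0):
    # Assumed weighting between the two terms; the paper's balance is not specified here.
    return luminance_preserving_loss(fused, vis) + lambda_edge * max_edge_loss(fused, vis, nir)
```

Taking the element-wise maximum of the source edge maps is one plausible reading of "max-edge loss": it asks the fused image to preserve the sharpest structure available from either modality while the luminance term anchors the overall brightness.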
