Multimodal Low-light Image Enhancement with Depth Information

Published: 20 Jul 2024 · Last Modified: 06 Aug 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Low-light image enhancement has been researched for several years. However, current image restoration methods predominantly focus on recovering images from RGB inputs alone, overlooking the potential of incorporating additional modalities. With the advancement of personal handheld devices, we can now easily capture images with depth information using devices such as mobile phones. How to integrate depth information into image restoration is a research question worthy of exploration. Therefore, in this paper, we propose a multimodal low-light image enhancement task based on depth information and establish a dataset named **LED** (**L**ow-light Image **E**nhanced with **D**epth Map), consisting of 1,365 samples. Each sample in our dataset includes a low-light image, a normal-light image, and the corresponding depth map. Moreover, for the LED dataset, we design a corresponding multimodal method, which processes the input images and depth map information simultaneously to generate the predicted normal-light images. Experimental results and detailed ablation studies prove the effectiveness of our method, which exceeds previous single-modal state-of-the-art methods from related fields.
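The abstract does not specify how the image and depth inputs are combined, so as a purely hypothetical illustration, one simple way to pair the two modalities is early fusion: normalizing the depth map and stacking it as an extra channel alongside the low-light RGB image before feeding it to an enhancement network. The function name and normalization scheme below are assumptions, not the paper's method.

```python
import numpy as np

def fuse_rgb_depth(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Hypothetical early-fusion sketch: stack a min-max-normalized
    depth map as a fourth channel next to the low-light RGB image.

    rgb:   (H, W, 3) float array in [0, 1]
    depth: (H, W) float array in arbitrary units
    """
    d = depth.astype(np.float64)
    # Rescale depth to [0, 1] so its range matches the RGB channels.
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    # Concatenate along the channel axis -> (H, W, 4) network input.
    return np.concatenate([rgb, d[..., None]], axis=-1)

rgb = np.random.rand(4, 4, 3)      # toy low-light image
depth = np.random.rand(4, 4) * 10  # toy depth map in metres
fused = fuse_rgb_depth(rgb, depth)
```

More elaborate designs (e.g. a separate depth encoder with cross-modal attention) are also plausible; this sketch only shows the simplest channel-level pairing of the two modalities in a LED-style sample.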
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: Unlike previous low-light image enhancement tasks, we additionally introduce depth information as a new modality and propose a multimodal low-light image enhancement task based on depth information. We also propose a new dataset for the low-light image enhancement task, called LED (Low-light Image Enhanced with Depth Map). Our paper investigates how to better utilize multimodal information to enhance low-light images more effectively.
Supplementary Material: zip
Submission Number: 1713