High Dynamic Range Novel View Synthesis with Single Exposure

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC 4.0
Abstract: High Dynamic Range Novel View Synthesis (HDR-NVS) aims to build an HDR 3D scene model from Low Dynamic Range (LDR) imagery. Typically, multiple-exposure LDR images are employed to capture a wider range of brightness levels in a scene, since a single LDR image cannot represent both the brightest and darkest regions simultaneously. While effective, this multiple-exposure HDR-NVS approach has significant limitations, including susceptibility to motion artifacts (e.g., ghosting and blurring) and high capture and storage costs. To overcome these challenges, we introduce, for the first time, the single-exposure HDR-NVS problem, where only single-exposure LDR images are available during training. We further introduce a novel approach, Mono-HDR-3D, featuring two dedicated modules formulated from the LDR image formation principles: one converts LDR colors to their HDR counterparts, and the other transforms HDR images back to LDR format, enabling unsupervised learning in a closed loop. Designed as a meta-algorithm, our approach can be seamlessly integrated with existing NVS models. Extensive experiments show that Mono-HDR-3D significantly outperforms previous methods. Source code is released at https://github.com/prinasi/Mono-HDR-3D.
Lay Summary: This paper introduces a new way to create high-quality 3D images with rich lighting details using ordinary photographs. Traditional methods require taking multiple photos of the same scene at different brightness levels (like adjusting your phone camera’s settings for dark and bright areas) to capture both shadows and highlights. However, this approach often leads to blurry or ghost-like artifacts when objects move between shots, and it demands more storage and effort. Our solution, called Mono-HDR-3D, eliminates these issues by working with just a single photo per viewpoint. Imagine taking one snapshot in a dimly lit room and still capturing all the details—from the glowing lamp to the darkest corners—without any extra steps. We achieve this through two smart tools: one enhances colors to mimic high-dynamic-range effects, while the other cross-checks results by converting them back to normal photo quality. This self-correcting loop ensures accuracy without needing special equipment. Unlike previous techniques, our method integrates seamlessly with existing 3D reconstruction methods and produces sharper, more realistic results. Experiments show it outperforms existing approaches, making advanced 3D imaging more accessible for applications like virtual reality, robotics, or even everyday photography where lighting conditions are challenging.
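The closed-loop idea described above — lifting LDR colors to HDR, then mapping the HDR result back to LDR so it can be checked against the input — can be illustrated with a minimal sketch. This is not the paper's actual architecture (Mono-HDR-3D uses learned modules; see the repository for the real implementation): here the two directions are stood in for by a simple gamma-based camera response model, with `gamma` and `exposure` as assumed illustrative parameters.

```python
import numpy as np

def ldr_to_hdr(ldr, gamma=2.2, exposure=1.0):
    # Illustrative inverse camera response: undo gamma encoding,
    # then rescale by exposure to recover linear radiance.
    # (In Mono-HDR-3D this direction is a learned module.)
    return np.power(np.clip(ldr, 0.0, 1.0), gamma) / exposure

def hdr_to_ldr(hdr, gamma=2.2, exposure=1.0):
    # Illustrative forward LDR image formation: apply exposure,
    # clip to the displayable range, then gamma-encode.
    return np.power(np.clip(hdr * exposure, 0.0, 1.0), 1.0 / gamma)

# Closed-loop consistency on a single-exposure LDR image:
# the reconstructed LDR should match the observed LDR, giving
# a supervision signal without any ground-truth HDR data.
ldr = np.random.rand(8, 8, 3)          # stand-in for an observed LDR view
hdr = ldr_to_hdr(ldr)                  # LDR -> HDR
recon = hdr_to_ldr(hdr)                # HDR -> LDR (closes the loop)
loss = float(np.mean((recon - ldr) ** 2))
```

With an exact analytic inverse, as here, the loop reconstructs the input almost perfectly; with learned modules, minimizing this reconstruction loss is what drives the unsupervised training described in the abstract.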
Link To Code: https://github.com/prinasi/Mono-HDR-3D
Primary Area: Deep Learning->Algorithms
Keywords: High Dynamic Range, Novel View Synthesis, Low Dynamic Range, Data Synthesis
Submission Number: 1339