Dynamic HDR Radiance Fields via Neural Scene Flow

Published: 14 Sept 2025, Last Modified: 13 Oct 2025
Venue: ICCV 2025 Wild3D Workshop
License: CC BY 4.0
Keywords: High Dynamic Range (HDR) Imaging, Dynamic Radiance Fields (4D), Multi-Exposure Video Reconstruction, 4D Scene Representation, Dynamic HDR Dataset
Abstract: Reliving transient moments captured by a single camera requires reconstructing accurate radiance, geometry, and 3D motion. While significant progress has been made in dynamic 3D scene reconstruction, high-dynamic-range (HDR) radiance fields of dynamic scenes remain difficult to reconstruct. This work introduces HDR-NSFF, a novel approach to reconstructing dynamic HDR radiance fields from a monocular camera with varying exposures. HDR imaging requires multiple LDR images captured at different exposures, but capturing dynamic scenes with alternating exposures introduces challenges such as the correspondence problem, motion inconsistency, color discrepancies, and low frame rates. Here, Neural Scene Flow Fields (NSFF) are used to jointly model scene flow with neural radiance fields, enabling both novel view synthesis and temporal interpolation. NSFF is extended to HDR radiance field reconstruction by modeling learnable explicit camera response functions, so that the NSFF and the camera response functions can be jointly estimated in challenging dynamic scenes. Since the color inconsistency between multi-exposure images prevents standard optical flow estimation from being applied directly, we mitigate this issue by incorporating DINOv2 semantic features, which provide exposure-invariant object-level priors for motion estimation. By integrating these components, HDR-NSFF effectively reconstructs dynamic HDR radiance fields from single-camera footage, overcoming the limitations of previous methods and enabling novel view synthesis and high-quality time interpolation in challenging HDR scenarios.
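The role of the camera response function (CRF) in the abstract can be illustrated with a minimal sketch: scene radiance predicted by the radiance field is scaled by the exposure time and passed through a differentiable response curve to produce an LDR pixel value, which can be compared against the observed image. The gamma-style parameterization, function name, and argument names below are illustrative assumptions, not the paper's actual model (the paper learns the CRF jointly with the NSFF).

```python
import numpy as np

def apply_crf(hdr_radiance, exposure_time, gamma=2.2):
    """Map HDR scene radiance to an LDR pixel value.

    Hypothetical stand-in for the paper's learnable CRF:
    scale radiance by the exposure time, clip to the sensor's
    saturation range, then apply a gamma-style response curve.
    In the actual method, the curve parameters would be
    optimized jointly with the radiance field.
    """
    irradiance = hdr_radiance * exposure_time
    saturated = np.clip(irradiance, 0.0, 1.0)  # sensor saturation
    return saturated ** (1.0 / gamma)          # response curve
```

With such a differentiable forward model, the same HDR radiance rendered at different exposure times can be supervised by the corresponding alternating-exposure LDR frames, which is what ties the varying-exposure video back to a single consistent HDR radiance field.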
Submission Number: 32