DGNR: Density-Guided Neural Point Rendering of Large Driving Scenes

Published: 01 Jan 2025, Last Modified: 12 Apr 2025, IEEE Trans. Autom. Sci. Eng. 2025, CC BY-SA 4.0
Abstract: Despite the recent success of Neural Radiance Fields (NeRF), it remains challenging to render large-scale driving scenes with long trajectories, particularly when both rendering quality and efficiency are in high demand. Existing methods for such scenes usually involve spatial warping, geometric supervision from zero-shot normal or depth estimation, or scene division strategies, where the synthesized views are often blurry or fail to meet the requirement of efficient rendering. To address these challenges, this paper presents a novel framework that learns a density space from the scenes to guide the construction of a point-based renderer, dubbed DGNR (Density-Guided Neural Rendering). In DGNR, explicit geometric priors are no longer needed, as geometry is intrinsically learned from the density space through volumetric rendering. Specifically, we use a differentiable renderer to synthesize images from the neural density features obtained from the learned density space. A density-based fusion module and geometric regularization are proposed to optimize the density space. Experiments on a widely used autonomous driving dataset validate the effectiveness of DGNR in synthesizing photorealistic driving scenes and achieving real-time rendering. Our project page is available at https://github.com/JOP-Lee/DGNR-Rendering.

Note to Practitioners: While Neural Radiance Fields (NeRF) have been gaining traction, it is still challenging to create highly detailed, efficient renderings of large driving scenes. Current methods often resort to spatial warping, geometric guidance from tools such as zero-shot normal or depth estimation, or dividing the scene into smaller parts. Unfortunately, these techniques can result in blurred images or fail to meet efficiency needs. To address these challenges, we introduce a learned density space to build a point-based renderer, termed Density-Guided Neural Rendering (DGNR). With DGNR, geometric priors are no longer needed, because the density space inherently learns them through volume rendering. Specifically, we use a differentiable renderer to create images from the neural density features derived from the learned density space. We also propose a density-based fusion module and geometric regularization to optimize the density space. We evaluated DGNR on a popular autonomous driving dataset and found it effective in creating realistic driving scenes while rendering at real-time rates. Project page: https://github.com/JOP-Lee/DGNR-Rendering.
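To make the density-guided pipeline described above more concrete, the following is a minimal, hypothetical sketch (not the authors' released code): it thresholds a learned density grid into a point cloud, attaches per-voxel neural features to the surviving points, and projects them into a camera to form a sparse feature image that a 2D decoder network could then turn into an RGB frame. All names (`density_to_points`, `project_points`, grid shapes, the threshold value) are illustrative assumptions.

```python
import torch

def density_to_points(density_grid, feature_grid, voxel_size, origin, threshold=0.5):
    """Hypothetical step 1: keep voxel centers whose learned density exceeds a threshold.

    density_grid: (D, H, W) densities learned via volumetric rendering.
    feature_grid: (C, D, H, W) per-voxel neural features.
    Returns point positions (N, 3) and their features (N, C).
    """
    mask = density_grid > threshold                      # occupied voxels
    idx = mask.nonzero(as_tuple=False).float()           # (N, 3) voxel indices
    points = origin + (idx + 0.5) * voxel_size           # voxel centers in world space
    feats = feature_grid[:, mask].T                      # (N, C) features of occupied voxels
    return points, feats

def project_points(points, feats, K, w2c, hw):
    """Hypothetical step 2: naive z-buffered projection into a sparse feature image."""
    H, W = hw
    ones = torch.ones(points.shape[0], 1)
    cam = (w2c @ torch.cat([points, ones], dim=1).T).T[:, :3]    # world -> camera
    z = cam[:, 2].clamp(min=1e-6)
    uv = (K @ (cam / z.unsqueeze(1)).T).T[:, :2].round().long()  # pinhole projection
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    uv, z, feats = uv[valid], z[valid], feats[valid]

    feat_img = torch.zeros(feats.shape[1], H, W)
    depth = torch.full((H, W), float("inf"))
    for i in range(uv.shape[0]):                                 # loop for clarity, not speed
        u, v = uv[i]
        if z[i] < depth[v, u]:                                   # keep the nearest point per pixel
            depth[v, u] = z[i]
            feat_img[:, v, u] = feats[i]
    return feat_img  # a 2D rendering network would decode this into an RGB image
```

This sketch omits the paper's density-based fusion module, geometric regularization, and differentiable rasterization details; it is only meant to illustrate the general idea of letting a learned density space decide where points carrying neural features should live.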