LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Oral · CC BY 4.0
Abstract: We introduce a new task: novel view synthesis for LiDAR sensors. Although traditional model-based LiDAR simulators paired with style-transfer neural networks can render novel views, they fall short of producing accurate and realistic LiDAR patterns because their renderers rely on explicit 3D reconstruction and game engines, which ignore important attributes of LiDAR points. We address this challenge by formulating, to the best of our knowledge, the first differentiable end-to-end LiDAR rendering framework, LiDAR-NeRF, which leverages a neural radiance field (NeRF) to jointly learn the geometry and attributes of 3D points. However, simply employing NeRF does not yield satisfactory results: it learns individual pixels in isolation and ignores local information, which leads to poor geometry, especially in low-texture areas. We therefore introduce a structural regularization method that preserves local structural details. To evaluate the effectiveness of our approach, we establish an object-centric multi-view LiDAR dataset, dubbed NeRF-MVL. It contains objects from 9 categories observed from 360-degree viewpoints with multiple LiDAR sensors. Extensive experiments on the scene-level KITTI-360 dataset and on our object-level NeRF-MVL show that LiDAR-NeRF significantly surpasses model-based algorithms.
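To make the idea of a differentiable end-to-end LiDAR renderer concrete, below is a minimal sketch of NeRF-style volume rendering along a single LiDAR ray, with extra heads for return intensity and ray-drop. The `LiDARField` network, the head names, and the uniform sampling scheme are illustrative assumptions for exposition, not the authors' exact implementation.

```python
# Hypothetical sketch: a NeRF-style field queried along a LiDAR ray, rendering
# an expected range (depth), intensity, and ray-drop probability. Names and
# architecture are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class LiDARField(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma = nn.Linear(hidden, 1)      # volume density
        self.intensity = nn.Linear(hidden, 1)  # LiDAR return intensity
        self.ray_drop = nn.Linear(hidden, 1)   # probability the ray returns nothing

    def forward(self, x):
        h = self.trunk(x)
        return (torch.relu(self.sigma(h)),
                torch.sigmoid(self.intensity(h)),
                torch.sigmoid(self.ray_drop(h)))

def render_ray(field, origin, direction, near=1.0, far=80.0, n_samples=128):
    """Volume-render expected depth, intensity, and ray-drop along one LiDAR ray."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # (n_samples, 3) points on the ray
    sigma, inten, drop = field(pts)
    delta = t[1] - t[0]                            # uniform sample spacing
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)   # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0)
    w = trans * alpha                              # standard NeRF rendering weights
    depth = (w * t).sum()                          # expected range measurement
    intensity = (w * inten.squeeze(-1)).sum()
    p_drop = (w * drop.squeeze(-1)).sum()
    return depth, intensity, p_drop
```

Because every step above is differentiable, supervising the rendered depth and intensity against real LiDAR returns trains geometry and point attributes jointly, which is the core of the end-to-end formulation described in the abstract.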
Primary Subject Area: [Content] Media Interpretation
Secondary Subject Area: [Content] Multimodal Fusion, [Generation] Generative Multimedia
Relevance To Conference: In real-world applications such as autonomous driving, multimedia systems often combine multiple perception sensors, notably LiDAR and cameras. However, previous methods focus mainly on generating camera images, and synthesizing novel LiDAR views remains unexplored. Despite the 3D nature of LiDAR, the task is challenging: LiDARs provide only a partial view of the scene, corrupted by sensor-specific attributes arising from the LiDAR's physical model. In this work, we propose a differentiable end-to-end LiDAR rendering framework, LiDAR-NeRF, which leverages a neural radiance field (NeRF) to jointly learn the geometry and attributes of 3D points. Moreover, we contribute to the community by establishing the first object-centric multi-view LiDAR dataset, NeRF-MVL.
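The abstract mentions a structural regularization that preserves local structure but does not spell it out here. Purely as an illustration, one common patch-level choice is to match local gradients of the rendered range image to those of the ground truth, as sketched below; the function name and the specific gradient-based loss are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of one possible structural regularization on a rendered range
# image: penalize differences between local finite-difference gradients of the
# rendered and ground-truth range images, so that patches (not just individual
# pixels) stay consistent. Illustrative stand-in, not the paper's exact loss.
import torch
import torch.nn.functional as F

def structural_reg(pred_range: torch.Tensor, gt_range: torch.Tensor) -> torch.Tensor:
    """pred_range, gt_range: (B, 1, H, W) range (depth) images."""
    def grads(x):
        # Horizontal and vertical finite differences capture local structure.
        return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]
    (px, py), (gx, gy) = grads(pred_range), grads(gt_range)
    return F.l1_loss(px, gx) + F.l1_loss(py, gy)
```

A loss of this kind would be added to the per-pixel depth/intensity reconstruction terms, directly targeting the failure mode noted in the abstract: per-pixel supervision alone ignores local structure in low-texture regions.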
Supplementary Material: zip
Submission Number: 4217