3D Reconstruction and Novel View Synthesis of Indoor Environments based on a Dual Neural Radiance Field

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Simultaneously achieving 3D reconstruction and novel view synthesis for indoor environments has widespread applications but is technically very challenging. State-of-the-art methods based on implicit neural functions can achieve excellent 3D reconstruction results, but their performance on novel view synthesis can be unsatisfactory. The development of neural radiance fields (NeRF) has revolutionized novel view synthesis; however, NeRF-based models can fail to reconstruct clean geometric surfaces. We have developed a dual neural radiance field (Du-NeRF) to simultaneously achieve high-quality geometry reconstruction and view rendering. Du-NeRF contains two geometric fields: one derived from an SDF field to facilitate geometric reconstruction, and the other derived from a density field to boost novel view synthesis. One of the innovative features of Du-NeRF is that it decouples a view-independent component from the density field and uses it as a label to supervise the learning of the SDF field. This reduces shape-radiance ambiguity and enables geometry and color to benefit from each other during training. Extensive experiments demonstrate that Du-NeRF significantly improves novel view synthesis and 3D reconstruction for indoor environments, and it is particularly effective in reconstructing areas containing fine geometry that does not obey multi-view color consistency.
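The abstract's key idea — decoupling a view-independent color component from the density branch and using it as a supervision label for the SDF branch — can be sketched as a toy loss computation. The function names, the additive diffuse-plus-specular color split, and the per-channel list representation below are illustrative assumptions, not the paper's actual implementation:

```python
def mse(a, b):
    # Mean squared error over per-channel color values.
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def dual_field_color_loss(diffuse, specular, sdf_color, gt):
    """Toy sketch of a dual-branch color supervision scheme.

    diffuse   : view-independent color predicted by the density branch
    specular  : view-dependent color predicted by the density branch
    sdf_color : color rendered from the SDF branch
    gt        : ground-truth pixel color from the captured image
    (hypothetical decomposition, assumed for illustration)
    """
    # Density branch: full color = view-independent + view-dependent,
    # supervised by the captured image.
    full = [d + s for d, s in zip(diffuse, specular)]
    loss_density = mse(full, gt)
    # SDF branch: supervised by the decoupled view-independent component,
    # treated as a fixed label (no gradient back to the density branch
    # in a real training setup), which reduces shape-radiance ambiguity.
    loss_sdf = mse(sdf_color, diffuse)
    return loss_density + loss_sdf
```

In an actual pipeline both losses would be summed with weighting coefficients and backpropagated through the respective neural fields; this sketch only illustrates the supervision flow described in the abstract.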
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Experience] Art and Culture
Relevance To Conference: We present a method for high-fidelity reconstruction and rendering of indoor scenes, utilizing data captured by RGBD cameras. Our approach can be practically applied to, or integrated into, distance education and metaverse-related digital city projects, which rely on data collected by multiple RGBD cameras to simultaneously reconstruct the geometry of environments, such as classrooms or residences, and render the corresponding colors. Furthermore, the balance our method strikes among running time, rendering quality, and reconstruction accuracy enables its application in real-world conservation scenarios, including 3D reconstruction of cultural heritage sites and artifacts inaccessible to human beings.
Supplementary Material: zip
Submission Number: 3312