3D Neural Light Quantum Fields for Photo-realistic 3D Reconstruction from 2D Images

Published: 06 May 2025, Last Modified: 06 May 2026
2025 IEEE Conference on Artificial Intelligence (CAI)
License: CC BY 4.0
Abstract: Rapid advancements in 3D reconstruction have produced numerous technologies that greatly enhance novel view synthesis. However, most of these methods remain limited to static scenes and rarely address the challenges posed by dynamic environments. Additionally, many current approaches rely on capturing 360-degree views of individual objects, which is often impractical: obtaining multiple simultaneous viewpoints of a scene is seldom feasible in real-world settings. In fields such as autonomous driving, robotic vision, and drone operations, environments are rarely static, and dynamic interactions are the norm. Enabling models to capture and interpret object movements accurately is therefore essential for effective 3D reconstruction of the natural world. To address these challenges, we introduce 3D Neural Light Quantum Fields (3D-LQF), a novel framework that advances both static and dynamic 3D reconstruction. 3D-LQF leverages the probabilistic properties of light to improve visual realism and reconstruction precision by integrating time-sequential modeling with quantum photon probability fields. Experimental results demonstrate that 3D-LQF surpasses existing state-of-the-art methods in static scene reconstruction and also excels at handling dynamic scenes under real-time requirements. These findings position 3D-LQF as a promising solution for real-time 3D reconstruction in dynamic environments and diverse real-world scenarios.