Neural 4D Scene Reconstruction with Multiple One-Shot Scanning Systems

Published: 05 Nov 2025, Last Modified: 30 Jan 2026 · 3DV 2026 Poster · CC BY 4.0
Keywords: Neural representation, Structured light, 3D scan, Near-Light Photometric Stereo
TL;DR: We propose an active lighting–based 3D reconstruction method using neural implicit representation, exploiting illumination constraints and multiplexed lighting to handle sparse views, textureless scenes, and dynamic low-SNR environments.
Abstract: Recently, 3D reconstruction from multi-view stereo (MVS) has advanced significantly with the introduction of neural implicit representation methods, which estimate voxel densities or signed distance fields (SDFs) to describe the 3D structure of a scene. However, because such neural methods typically require a large number of captured images to estimate dense volumetric information during training, recovering the 3D shape of moving objects with only a small number of stationary cameras remains both highly demanded and challenging. To address the issue of sparse views, various active lighting techniques have been proposed; nevertheless, the problem remains inherently difficult, particularly when capturing the complete shape of an object with a wide baseline. In this paper, we propose a novel approach that combines active lighting with photometric stereo (PS) using neural representations. In addition, we introduce a multiplexed illumination technique that captures the entire shape of an object in a single shot. Although this results in a low signal-to-noise ratio (SNR), our method addresses this issue as well. The advantages of our technique are demonstrated through real-world experiments, showcasing its ability to capture a 4D scene.
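The SDF-based neural implicit representations mentioned in the abstract are typically rendered volumetrically by converting signed distance to per-sample opacity along each camera ray. As a minimal, self-contained illustration (not this paper's method), the numpy sketch below renders an analytic sphere SDF with a NeuS-style logistic weighting; the function names, the sampling range, and the sharpness parameter `s` are assumptions chosen for the example.

```python
import numpy as np

def sdf_sphere(p, center=np.zeros(3), radius=0.5):
    """Signed distance to a sphere: positive outside, negative inside."""
    return np.linalg.norm(p - center, axis=-1) - radius

def render_ray(origin, direction, n_samples=128, s=64.0):
    """Volume-render one ray through the SDF, weighting samples by
    drops of a logistic CDF of the signed distance (NeuS-style),
    and return the expected depth along the ray."""
    t = np.linspace(0.0, 2.0, n_samples)
    pts = origin + t[:, None] * direction
    sdf = sdf_sphere(pts)
    # Logistic CDF of the SDF: ~1 outside the surface, ~0 inside.
    cdf = 1.0 / (1.0 + np.exp(-s * sdf))
    # Discrete opacity from consecutive CDF values (clipped to [0, 1]).
    alpha = np.clip((cdf[:-1] - cdf[1:]) / np.maximum(cdf[:-1], 1e-6), 0.0, 1.0)
    # Transmittance and per-sample weights, as in standard volume rendering.
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    w = trans * alpha
    return np.sum(w * 0.5 * (t[1:] + t[:-1]))  # expected depth

# A ray from z = -1 toward +z hits the unit-diameter sphere at depth ~0.5.
origin = np.array([0.0, 0.0, -1.0])
direction = np.array([0.0, 0.0, 1.0])
depth = render_ray(origin, direction)
```

Because the weights concentrate where the SDF crosses zero, the rendered depth converges to the true surface depth as the sampling density and the sharpness `s` increase.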
Supplementary Material: zip
Submission Number: 181