Broadening View Synthesis of Dynamic Scenes from Constrained Monocular Videos

Published: 05 Nov 2025, Last Modified: 30 Jan 2026 | 3DV 2026 Poster | CC BY 4.0
Keywords: Dynamic Neural Radiance Fields (NeRF), Novel View Synthesis, Monocular 3D Reconstruction, Gaussian Splatting
TL;DR: We propose a novel dynamic NeRF framework that extends reliable view synthesis to extreme angles from monocular input using pseudo-novel-view supervision, and introduce SynDM, the first GTA V-based dataset for evaluating large-angle view rendering.
Abstract: In dynamic Neural Radiance Fields (NeRF) systems, state-of-the-art novel view synthesis methods often fail under significant viewpoint deviations, producing unstable and unrealistic renderings. To address this, we introduce Expanded Dynamic NeRF (ExpanDyNeRF), a monocular NeRF framework that leverages Gaussian splatting priors and a pseudo-ground-truth generation strategy to enable realistic synthesis under large-angle rotations. ExpanDyNeRF optimizes density and color features to improve scene reconstruction from challenging perspectives. We also present the Synthetic Dynamic Multiview (SynDM) dataset—the first synthetic multiview dataset for dynamic scenes with explicit side-view supervision—created using a custom GTA V-based rendering pipeline. Quantitative and qualitative results on SynDM and real-world datasets demonstrate that ExpanDyNeRF significantly outperforms existing dynamic NeRF methods in rendering fidelity under extreme viewpoint shifts. Further details are provided in the supplementary materials.
Supplementary Material: zip
Submission Number: 214