Keywords: Dynamic scene reconstruction, Neural radiance field, Gaussian splatting
TL;DR: Given unsynchronized videos, Sync-4DRF optimizes learnable time offsets to calibrate misaligned time-dependent embeddings jointly with the radiance field, enabling successful dynamic scene reconstruction.
Abstract: Recent advancements in 4D scene reconstruction using dynamic NeRF and 3DGS have demonstrated the ability to represent dynamic scenes from multi-view videos. However, in unsynchronized settings they fail to reconstruct the dynamic scene and struggle to fit even the training views. This failure occurs because they employ a single latent embedding per frame, while the multi-view images assigned to the same frame were actually captured at different moments. To address this limitation, we introduce a time offset for each unsynchronized video and jointly optimize the offsets with the field. By design, our method is applicable to various baselines, regardless of the type of radiance field. We conduct experiments on the common Plenoptic Video Dataset and a newly built Unsynchronized Dynamic Blender Dataset to verify the performance of our method. Code will be available: https://github.com/seoha-kim/Sync-4DRF
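The core idea of the abstract — treating each camera's time offset as a learnable parameter and fitting it by gradient descent — can be illustrated with a toy sketch. This is not the authors' code: it assumes a known 1D signal standing in for the radiance field (in Sync-4DRF the field itself is optimized jointly with the offsets), and all names here are hypothetical.

```python
import math

def signal(t):
    """Stand-in for the dynamic scene; in the real method this is the radiance field."""
    return math.sin(t)

# Ground-truth per-camera offsets for 3 unsynchronized cameras (unknown to the optimizer).
true_offsets = [0.0, 0.15, -0.08]
frame_times = [0.1 * k for k in range(50)]

# Observations: camera i actually captured the scene at t + true_offsets[i],
# even though its frames are labeled with the nominal time t.
obs = [[signal(t + d) for t in frame_times] for d in true_offsets]

# Learnable offsets, initialized to zero, fitted by gradient descent on the
# squared reconstruction error (analytic gradient, since d/dx sin(x) = cos(x)).
offsets = [0.0, 0.0, 0.0]
lr = 0.05
for _ in range(2000):
    for i in range(len(offsets)):
        grad = 0.0
        for t, y in zip(frame_times, obs[i]):
            pred = math.sin(t + offsets[i])
            grad += 2.0 * (pred - y) * math.cos(t + offsets[i])
        offsets[i] -= lr * grad / len(frame_times)

print([round(d, 3) for d in offsets])  # should approach true_offsets
```

The recovered offsets converge to the ground-truth values, mirroring how calibrated offsets let a single time-conditioned embedding per (offset-corrected) timestamp explain all views.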
Submission Number: 35