Keywords: 3D computer vision, novel view synthesis, dynamic scene reconstruction, model training acceleration, neural radiance field, NeRF
Abstract: Dynamic neural radiance fields (dynamic NeRFs) have achieved remarkable success in synthesizing novel views of 3D dynamic scenes. Traditional approaches typically require full video sequences for training before new views can be synthesized, akin to replaying a recording of a dynamic 3D event. In contrast, on-the-fly training allows for the immediate processing and rendering of dynamic scenes without pre-training on full sequences, offering a more flexible and time-efficient solution for dynamic scene rendering tasks. In this paper, we propose a highly efficient on-the-fly training algorithm for dynamic NeRFs, named OD-NeRF. To accelerate the training process, our method minimizes the training required at each frame by using: 1) a NeRF model conditioned on multi-view projected colors, which exhibits superior generalization across frames with minimal training, and 2) a transition and update algorithm that leverages the occupancy grid from the last frame to sample efficiently at the current frame. Our algorithm achieves an interactive on-the-fly training speed of 10 FPS on synthetic dynamic scenes, and a 3$\times$-9$\times$ training speed-up over the state-of-the-art on-the-fly NeRF on real-world dynamic scenes.
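The occupancy-grid transition and update described in point 2) can be sketched roughly as follows. This is a hypothetical illustration, not the paper's actual algorithm: the function name, the exponential decay of the previous frame's grid, and the density-refresh step are all assumptions about how a grid might be carried from frame $t-1$ to frame $t$ to skip empty space during ray sampling.

```python
import numpy as np

def transition_occupancy_grid(prev_grid, frame_density_fn, cell_centers,
                              decay=0.95, threshold=0.01):
    """Warm-start frame t's occupancy grid from frame t-1's grid
    (hypothetical sketch; names and update rule are assumptions).

    prev_grid:        per-cell occupancy estimates from the last frame.
    frame_density_fn: maps (N, 3) cell centers to densities at the
                      current frame (stand-in for the NeRF density head).
    cell_centers:     (N, 3) array of grid-cell center coordinates.
    """
    # Carry over the previous frame's estimate, decayed so that
    # occupancy from regions the scene has left gradually fades.
    grid = prev_grid * decay
    # Refresh with densities queried at the current frame, keeping
    # whichever estimate is larger per cell.
    grid = np.maximum(grid, frame_density_fn(cell_centers))
    # Binary mask: rays only place samples inside occupied cells.
    occupied = grid > threshold
    return grid, occupied
```

The point of reusing the grid rather than rebuilding it per frame is that consecutive frames of a dynamic scene differ only locally, so most cells need no fresh density queries to remain accurate.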
Supplementary Material: pdf
Submission Number: 122