Learning Depth-regularized Radiance Fields from Asynchronous RGB-D Sequences

11 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: Neural Radiance Fields
Abstract: Recent work has shown that learning radiance fields with depth rendering and depth supervision can effectively improve view synthesis quality and convergence. However, this paradigm requires the input RGB-D sequences to be synchronized, which hinders its use in UAV-based city modeling. To this end, we propose to jointly learn large-scale depth-regularized radiance fields and calibrate the mismatch between RGB-D frames. Although this joint learning problem could be naively addressed by introducing additional pose variables, we instead exploit the prior that the RGB and depth frames are sampled from the same physical trajectory. Specifically, we propose a novel time-pose function: an implicit network that maps timestamps to SE(3) elements. Our algorithm alternates between three steps: (1) time-pose function fitting; (2) radiance field bootstrapping; and (3) joint pose error compensation and radiance field refinement. To systematically evaluate this new problem setting, we introduce a large synthetic dataset with diverse, controlled mismatches and ground truth. Extensive experiments demonstrate that our method outperforms strong baselines. We also show qualitatively improved results on a real-world asynchronous RGB-D sequence captured by a drone. Code, data, and models will be made publicly available.
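The abstract does not spell out how the time-pose function is realized, so the following is a minimal PyTorch sketch of one plausible instantiation: an MLP over Fourier-encoded timestamps that outputs an SE(3) element as a translation vector plus a unit quaternion. The class name, layer widths, and encoding choices are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TimePoseFunction(nn.Module):
    """Hypothetical time-pose function: maps a scalar timestamp t to an
    SE(3) pose, parameterized as (translation, unit quaternion)."""

    def __init__(self, hidden_dim: int = 256, num_freqs: int = 6):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 2 * num_freqs  # sin/cos Fourier features of t
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 7),  # 3 translation + 4 quaternion
        )

    def encode(self, t: torch.Tensor) -> torch.Tensor:
        # Fourier features let the MLP fit high-frequency trajectory detail.
        freqs = 2.0 ** torch.arange(self.num_freqs, device=t.device) * torch.pi
        angles = t[..., None] * freqs                      # (..., num_freqs)
        return torch.cat([angles.sin(), angles.cos()], dim=-1)

    def forward(self, t: torch.Tensor):
        out = self.mlp(self.encode(t))
        trans, quat = out[..., :3], out[..., 3:]
        quat = quat / quat.norm(dim=-1, keepdim=True)      # project onto SO(3)
        return trans, quat

# Usage: fit to timestamped poses from one sensor, then query the other
# sensor's timestamps to estimate its poses on the shared trajectory.
model = TimePoseFunction()
t = torch.tensor([0.25, 0.50])       # normalized timestamps
translation, rotation = model(t)     # shapes (2, 3) and (2, 4)
```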
Supplementary Material: zip
Submission Number: 12842