Keywords: Novel View Synthesis, 3D Gaussian Splatting, Neural Radiance Fields, Reconstruction
TL;DR: We leverage novel view synthesis to improve novel view synthesis itself: an iterative approach creates pseudo views to improve scene coverage and adds their most certain parts to the training views.
Abstract: Recent NeRF and Gaussian Splatting methods have shown remarkable reconstruction and novel view synthesis (NVS) capabilities, but require a substantial number of images of the scene from diverse viewpoints to render high-quality novel views. With fewer images, they struggle to correctly triangulate the underlying 3D geometry and converge to a sub-optimal solution (e.g., with floaters or blurry renderings). In this paper, we propose Re-Nerfing, a general approach that leverages NVS itself to tackle this convergence problem. Using an already optimized scene representation model, we generate novel views derived from existing perspectives and use them to augment the training data of a second model. We add the generated views to improve scene coverage and mask out their uncertain areas to enhance the quality of the training signal. This introduces additional multi-view constraints and allows the second model to converge to a better solution. With Re-Nerfing, we introduce an iterative paradigm that achieves significant improvements over multiple pipelines based on NeRF and 3D Gaussian Splatting in sparse and highly-sparse view settings of the mip-NeRF 360, Tanks and Temples, and LLFF datasets. Notably, Re-Nerfing does not require prior knowledge or extra supervision signals, making it a flexible and practical enhancement to any learnable NVS pipeline.
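The iterative scheme described in the abstract (train, synthesize pseudo views, mask uncertain regions, augment, retrain) can be sketched as the following loop. This is a hypothetical illustration only: `train`, `synthesize_pseudo_views`, and `mask_uncertain_regions` are placeholder stand-ins for the paper's actual NeRF/3DGS training, rendering, and uncertainty-masking components, not real code from the authors.

```python
# Minimal sketch of a Re-Nerfing-style iterative augmentation loop.
# Every function here is a hypothetical placeholder, not the authors' code.

def train(views):
    # Placeholder: optimize a scene representation (NeRF / 3D Gaussian
    # Splatting) on the given training views.
    return {"trained_on": list(views)}

def synthesize_pseudo_views(model, n=2):
    # Placeholder: render novel views from poses derived from the
    # existing camera perspectives.
    return [f"pseudo_view_{len(model['trained_on'])}_{i}" for i in range(n)]

def mask_uncertain_regions(view):
    # Placeholder: keep only the confident parts of the rendered view
    # by pairing it with a certainty mask.
    return (view, "certainty_mask")

def re_nerfing(initial_views, iterations=2):
    views = list(initial_views)
    model = train(views)                      # first model on sparse views
    for _ in range(iterations):
        pseudo = [mask_uncertain_regions(v)
                  for v in synthesize_pseudo_views(model)]
        views.extend(pseudo)                  # augment the training set
        model = train(views)                  # retrain on augmented views
    return model, views

model, views = re_nerfing(["img_0", "img_1", "img_2"])
```

Each iteration adds masked pseudo views to the pool, so the second (and later) models see denser multi-view coverage than the first.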
Submission Number: 28