Keywords: Neural Radiance Fields, Few-Shot 3D Reconstruction, Self-Supervised Learning
TL;DR: We apply self-supervised learning to boost performance in few-shot NeRF
Abstract: Recently, neural radiance fields (NeRF) have shown remarkable performance in novel view synthesis and 3D reconstruction. However, NeRF still requires abundant high-quality images as input, limiting its applicability in real-world scenarios. To overcome this limitation, recent works train NeRF from only sparse viewpoints by adding extra regularization. However, because the task is under-constrained, regularization alone is not enough to prevent the model from overfitting to the sparse viewpoints. In this paper, we propose a novel framework, dubbed self-evolving neural radiance fields (SE-NeRF), that applies the self-training paradigm to NeRF to address these problems. We cast few-shot NeRF as a teacher-student framework that guides the network toward a more robust representation of the scene by training the student with additional pseudo labels generated by the teacher. By distilling ray-level pseudo labels through distinct distillation schemes for reliable and unreliable rays, identified by our novel reliability estimation method, we enable NeRF to learn more accurate and robust geometry of the 3D scene. We show that applying our self-training framework to existing NeRF models improves the quality of the rendered images and achieves state-of-the-art performance.
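The abstract describes reliability-aware ray-level distillation in which pseudo labels from the teacher are weighted differently for reliable and unreliable rays. The paper's exact loss and reliability estimator are not given here, so the following is only a minimal sketch under assumed definitions: `teacher_render` and `student_render` are per-ray RGB predictions, `reliability` is a hypothetical per-ray confidence score in [0, 1], and the weighting scheme (full weight for reliable rays, reliability-scaled weight otherwise) is illustrative rather than the paper's actual method.

```python
import numpy as np

def distill_pseudo_labels(teacher_render, student_render, reliability, threshold=0.5):
    """Toy ray-level distillation loss between teacher pseudo labels and
    student predictions. Rays with reliability above `threshold` contribute
    at full weight; the rest are down-weighted by their reliability score.
    All names and the weighting scheme are illustrative assumptions."""
    # Per-ray squared color error against the teacher's pseudo label.
    per_ray_loss = np.mean((student_render - teacher_render) ** 2, axis=-1)
    reliable = reliability >= threshold
    # Distinct schemes: hard (weight 1.0) for reliable rays,
    # soft (weight = reliability) for unreliable rays.
    weights = np.where(reliable, 1.0, reliability)
    return float(np.mean(weights * per_ray_loss))

# Example: four rays, teacher and student disagree by 1.0 per channel.
teacher = np.ones((4, 3))
student = np.zeros((4, 3))
reliability = np.array([1.0, 0.0, 0.6, 0.2])
loss = distill_pseudo_labels(teacher, student, reliability)  # → 0.55
```

In a full self-training loop, this distillation term would be added to the usual photometric loss on the ground-truth sparse views, and the student would periodically replace the teacher.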
Submission Number: 15