FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors

15 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Neural rendering, Novel view synthesis, Few-shot NeRF
Abstract: Neural Radiance Fields (NeRF) face significant challenges in few-shot scenarios, particularly overfitting and long training times for high-fidelity rendering. While current approaches like FreeNeRF and SparseNeRF use frequency regularization or pre-trained priors, they can be limited by complex scheduling or potential biases. We introduce FrugalNeRF, a novel few-shot NeRF framework that leverages weight-sharing voxels across multiple scales to efficiently represent scene details. Our key contribution is a cross-scale geometric adaptation training scheme that selects pseudo ground-truth depth based on reprojection error, computed from both training and novel views across scales. This guides training without relying on externally learned priors, allowing FrugalNeRF to fully utilize the available data. While not dependent on pre-trained priors, FrugalNeRF can optionally integrate them for enhanced quality without affecting convergence speed. Our method generalizes effectively across diverse scenes and converges more rapidly than state-of-the-art approaches. Our experiments on the standard LLFF, DTU, and RealEstate-10K datasets demonstrate that FrugalNeRF outperforms existing few-shot NeRF models, including those using pre-trained priors, while significantly reducing training time. This makes it a practical solution for efficient and accurate 3D scene reconstruction.
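To make the cross-scale geometric adaptation idea concrete, the following is a minimal sketch of how a per-pixel pseudo ground-truth depth could be selected by reprojection error across scales. All names and signatures here (`reprojection_error`, `select_pseudo_depth`, the intrinsics/pose conventions) are illustrative assumptions, not the authors' actual implementation, which the abstract does not specify.

```python
# Hypothetical sketch: choose, per pixel, the depth candidate (one per voxel
# scale) whose reprojection into another view is most photometrically
# consistent. Assumes a pinhole camera model; all symbols are assumptions.
import numpy as np

def reprojection_error(depth, uv, K, T_src2ref, img_src, img_ref):
    """Warp pixel uv from the source view into the reference view using the
    candidate depth, then compare colors (lower error = more consistent)."""
    # Back-project the pixel to a 3D point in the source camera frame.
    x = depth * (np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0]))
    # Transform into the reference camera frame and project.
    x_ref = T_src2ref[:3, :3] @ x + T_src2ref[:3, 3]
    p = K @ x_ref
    if p[2] <= 0:
        return np.inf  # point lies behind the reference camera
    u, v = np.round(p[:2] / p[2]).astype(int)
    h, w = img_ref.shape[:2]
    if not (0 <= u < w and 0 <= v < h):
        return np.inf  # projects outside the reference image
    return float(np.linalg.norm(img_src[uv[1], uv[0]] - img_ref[v, u]))

def select_pseudo_depth(depths_per_scale, uv, K, T_src2ref, img_src, img_ref):
    """Pick the scale whose rendered depth reprojects best for this pixel."""
    errors = [reprojection_error(d, uv, K, T_src2ref, img_src, img_ref)
              for d in depths_per_scale]
    best = int(np.argmin(errors))
    return depths_per_scale[best], best  # pseudo ground-truth depth + scale id
```

The point of such a selection rule is that geometric consistency alone arbitrates between scales, so no externally learned depth prior is needed; the winning depth can then supervise the other scales during training.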
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 891