SCoRF: Single-stage convolutional radiance fields for effective 3D scene representation

22 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Computer Vision, Computational Photography, Novel View Synthesis
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Novel view synthesis from multiple captured images is a critical research topic in computer vision and computational photography due to its wide range of applications. Neural radiance fields significantly improve performance by optimizing a continuous volumetric scene function with a multi-layer perceptron. Although neural radiance fields and their variants render high-quality scenes, their hierarchical architecture of coarse and fine networks limits how well they can represent color and density. They also require numerous parameters and considerable training time, and they generally ignore the local and global relationships between samples along a ray. This paper proposes a unified single-stage paradigm that jointly learns the relative positions of samples along three-dimensional rays together with their relative color and density for complex scenes, using a convolutional neural network to suppress noise and irrelevant features and to prevent overfitting. Experimental results, including ablation studies, show that the proposed approach is more robust than current state-of-the-art models for synthesizing novel views.
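To make the single-stage idea concrete, below is a minimal, hypothetical PyTorch sketch (not the submission's code; all module and parameter names, such as SingleStageConvRadianceField and the 63/27-dimensional positional encodings, are assumptions). It illustrates the general technique the abstract describes: instead of a coarse/fine pair of per-point MLPs, a single network applies 1D convolutions along the sample axis of each ray so that a sample's color and density can depend on its neighbours.

```python
# Hypothetical sketch of a single-stage convolutional radiance field head.
# A per-sample MLP extracts features from encoded 3D points, then 1D
# convolutions along the ray axis model local relationships between
# neighbouring samples before predicting density and view-dependent color.
import torch
import torch.nn as nn


class SingleStageConvRadianceField(nn.Module):
    def __init__(self, pos_dim=63, dir_dim=27, hidden=128):
        super().__init__()
        # Per-sample feature extraction from positionally encoded 3D points.
        self.point_mlp = nn.Sequential(
            nn.Linear(pos_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Convolutions over the sample axis of each ray capture local
        # (and, with more layers, increasingly global) context along the ray.
        self.ray_conv = nn.Sequential(
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.rgb_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, pts_enc, dirs_enc):
        # pts_enc:  (rays, samples, pos_dim) encoded sample positions
        # dirs_enc: (rays, samples, dir_dim) encoded view directions
        feats = self.point_mlp(pts_enc)               # (R, S, H)
        feats = self.ray_conv(feats.transpose(1, 2))  # conv over the sample axis
        feats = feats.transpose(1, 2)                 # back to (R, S, H)
        sigma = torch.relu(self.sigma_head(feats))    # density per sample
        rgb = self.rgb_head(torch.cat([feats, dirs_enc], dim=-1))
        return rgb, sigma


# Toy usage: 4 rays with 64 samples each.
model = SingleStageConvRadianceField()
rgb, sigma = model(torch.randn(4, 64, 63), torch.randn(4, 64, 27))
print(rgb.shape, sigma.shape)  # torch.Size([4, 64, 3]) torch.Size([4, 64, 1])
```

The outputs would then feed a standard volume-rendering step; the sketch only illustrates how a single convolutional stage can replace the coarse/fine hierarchy, not the paper's exact architecture or training procedure.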
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5091