Keywords: Novel View Synthesis, Point Clouds, Neural Rendering, Implicit Representations
TL;DR: We propose a new representation that implicitly models a point cloud to improve image quality for point-based radiance field methods.
Abstract: We introduce a new approach for reconstruction and novel view synthesis of unbounded real-world scenes.
In contrast to previous methods that use volumetric fields, grid-based models, or discrete point cloud proxies, we propose a hybrid scene representation, which *implicitly* encodes the geometry in a continuous octree-based probability field and the view-dependent appearance in a multi-resolution hash grid.
This allows for extraction of arbitrary *explicit* point clouds, which can be rendered using rasterization.
In doing so, we combine the benefits of both worlds and retain favorable behavior during optimization:
Our novel implicit point cloud representation and differentiable bilinear rasterizer enable fast rendering while preserving the fine geometric detail captured by volumetric neural fields.
Furthermore, this representation does not depend on priors such as structure-from-motion point clouds.
Our method achieves state-of-the-art image quality on common benchmarks.
In addition, inference is fast and runs at interactive frame rates, and the trained model can be converted into a large, explicit point cloud to further improve rendering performance.
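To make the hybrid representation more concrete, the following minimal NumPy sketch illustrates the core idea of sampling an explicit point cloud from an implicit probability field. It is an assumption-laden toy example, not the paper's implementation: the octree leaf cells, the `sample_explicit_points` and `query_appearance` functions, and all parameter names are hypothetical, and the hash-grid appearance lookup is replaced by a placeholder.

```python
import numpy as np

# Hypothetical leaf cells of an octree-based probability field:
# each cell has a center, a size, and an (unnormalized) occupancy probability.
rng = np.random.default_rng(0)
num_cells = 1024
cell_centers = rng.uniform(-1.0, 1.0, size=(num_cells, 3))
cell_sizes = np.full((num_cells, 1), 2.0 / 16.0)    # uniform leaf size for simplicity
cell_probs = rng.uniform(0.0, 1.0, size=num_cells)  # stand-in for learned probabilities


def sample_explicit_points(num_points: int) -> np.ndarray:
    """Draw an explicit point cloud from the implicit probability field.

    Cells are chosen proportionally to their occupancy probability, and a point
    is placed uniformly at random inside each chosen cell.
    """
    pmf = cell_probs / cell_probs.sum()
    idx = rng.choice(num_cells, size=num_points, p=pmf)
    jitter = rng.uniform(-0.5, 0.5, size=(num_points, 3)) * cell_sizes[idx]
    return cell_centers[idx] + jitter


def query_appearance(points: np.ndarray, view_dirs: np.ndarray) -> np.ndarray:
    """Placeholder for a multi-resolution hash-grid appearance lookup.

    A real implementation would hash each point at several resolutions, gather
    learned feature vectors, and decode them (together with the view direction)
    into per-point colors or features with a small network.
    """
    return np.zeros((points.shape[0], 3))  # stand-in: black points


points = sample_explicit_points(100_000)
dirs = np.tile(np.array([0.0, 0.0, 1.0]), (points.shape[0], 1))
colors = query_appearance(points, dirs)
# In the method described above, `points` and `colors` would then be splatted
# with a differentiable bilinear rasterizer to form the rendered image.
```

Because the point cloud is re-sampled from the probability field rather than fixed up front, the same trained model can also be queried once to export a large, explicit point cloud for faster rendering, as noted in the abstract.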
Supplementary Material: zip
Submission Number: 311