PAPR: Proximity Attention Point Rendering

Published: 21 Sept 2023, Last Modified: 19 Dec 2023 · NeurIPS 2023 spotlight
Keywords: point cloud learning, point cloud rendering
TL;DR: A point-based 3D representation and rendering method that can learn scene surface geometry from scratch using a point cloud
Abstract: Learning accurate and parsimonious point cloud representations of scene surfaces from scratch remains a challenge in 3D representation learning. Existing point-based methods often suffer from the vanishing gradient problem or require a large number of points to accurately model scene geometry and texture. To address these limitations, we propose Proximity Attention Point Rendering (PAPR), a novel method that consists of a point-based scene representation and a differentiable renderer. Our scene representation uses a point cloud where each point is characterized by its spatial position, influence score, and view-independent feature vector. The renderer selects the relevant points for each ray and produces accurate colours using their associated features. PAPR effectively learns point cloud positions to represent the correct scene geometry, even when the initialization drastically differs from the target geometry. Notably, our method captures fine texture details while using only a parsimonious set of points. We also demonstrate four practical applications of our method: zero-shot geometry editing, object manipulation, texture transfer, and exposure control. More results and code are available on our project website at https://zvict.github.io/papr/.
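To make the representation described in the abstract concrete, below is a minimal, hypothetical sketch of what a proximity-attention point renderer could look like. It is reconstructed only from the abstract's description (per-point position, influence score, and view-independent feature; attention-based point selection per ray), not from the paper's actual implementation: the class name `ProximityAttentionRenderer`, the distance-based attention logits, and the linear RGB decoder are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ProximityAttentionRenderer(nn.Module):
    """Conceptual sketch (not the paper's implementation) of a
    proximity-attention point renderer. Each point carries a learnable
    position, a scalar influence score, and a view-independent feature;
    per ray, attention weights derived from point-to-ray proximity select
    the relevant points, and the attended feature is decoded to a colour."""

    def __init__(self, num_points: int, feat_dim: int = 32):
        super().__init__()
        self.positions = nn.Parameter(torch.randn(num_points, 3))        # learnable point positions
        self.influence = nn.Parameter(torch.zeros(num_points))           # per-point influence scores
        self.features = nn.Parameter(torch.randn(num_points, feat_dim))  # view-independent features
        self.to_rgb = nn.Linear(feat_dim, 3)                             # feature -> colour decoder (assumed)

    def forward(self, ray_o: torch.Tensor, ray_d: torch.Tensor) -> torch.Tensor:
        # ray_o, ray_d: (R, 3), with ray_d assumed unit-normalized.
        # Perpendicular distance from each of the N points to each of the R rays.
        v = self.positions[None, :, :] - ray_o[:, None, :]   # (R, N, 3)
        t = (v * ray_d[:, None, :]).sum(-1, keepdim=True)    # projection length along each ray
        dist = (v - t * ray_d[:, None, :]).norm(dim=-1)      # (R, N) point-to-ray distances
        # Proximity attention: nearer points with higher influence get larger weights.
        logits = self.influence[None, :] - dist              # (R, N), assumed form of the logits
        attn = torch.softmax(logits, dim=-1)                 # soft, fully differentiable point selection
        feat = attn @ self.features                          # (R, feat_dim) attended features
        return torch.sigmoid(self.to_rgb(feat))              # (R, 3) predicted colours

# Example usage: render colours for a batch of 4 rays over 1024 points.
renderer = ProximityAttentionRenderer(num_points=1024)
rays_o = torch.zeros(4, 3)
rays_d = nn.functional.normalize(torch.randn(4, 3), dim=-1)
rgb = renderer(rays_o, rays_d)  # shape (4, 3)
```

One plausible reading of the abstract's claims is that a softmax over all points, as sketched here, keeps every point's position and influence differentiable for each ray, which would help avoid the vanishing-gradient problem the abstract attributes to existing point-based methods; the paper's actual attention mechanism and decoder may differ substantially.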
Submission Number: 8878