PAPR Up-close: Close-up Neural Point Rendering without Holes

Published: 05 Nov 2025, Last Modified: 30 Jan 2026 · 3DV 2026 Poster · CC BY 4.0
Keywords: Novel Close-up View Synthesis, Neural Point Rendering, Proximity Attention Point Rendering
Abstract: Point-based representations have recently gained popularity in neural rendering. While they offer many advantages, rendering them from close-up views often results in holes. In splatting-based neural point renderers, these holes arise from gaps between splats, so that many rays fail to intersect any splat when viewed close-up. A different line of work uses attention to estimate each ray's intersection by interpolating between nearby points. Our work builds on one such method, known as Proximity Attention Point Rendering (PAPR), which learns parsimonious and geometrically accurate point representations. While in principle PAPR can fill holes by learning to interpolate between nearby points appropriately, it also produces holes when rendering close-up, because the intersection point is often predicted incorrectly. We analyze this phenomenon and propose two novel solutions: a method for dynamically selecting the points near each ray that are used for interpolation, and a robust attention method that generalizes better to the local point configurations around unseen rays. Together, these significantly reduce the prevalence of holes and other artifacts in close-up rendering compared to recent neural point renderers.
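To make the attention-based interpolation described in the abstract concrete, below is a minimal PyTorch sketch of proximity attention over the points nearest a ray. It is an illustration only, not the authors' PAPR implementation: the function name, the raw distance-based scoring, the temperature tau, and the fixed top-k selection are all assumptions; PAPR itself uses learned queries and keys, and the paper's contribution is precisely a dynamic point-selection and a more robust attention scheme than this sketch.

import torch

def proximity_attention(ray_o, ray_d, points, feats, k=8, tau=0.1):
    """Interpolate point features along a ray via distance-based attention.

    Toy sketch of the general idea (attention over points near a ray to
    estimate its intersection); not the actual PAPR model.

    ray_o, ray_d: (3,) ray origin and unit direction
    points:       (N, 3) point positions
    feats:        (N, C) per-point features
    """
    # Perpendicular distance from each point to the ray.
    rel = points - ray_o                                  # (N, 3)
    t = rel @ ray_d                                       # projection length along the ray
    closest = ray_o + t[:, None] * ray_d                  # closest point on the ray, (N, 3)
    dist = torch.linalg.norm(points - closest, dim=-1)    # (N,)

    # Select the k points nearest to this ray.
    d_k, idx = torch.topk(dist, k, largest=False)

    # Softmax attention over negative distances; tau controls sharpness.
    w = torch.softmax(-d_k / tau, dim=0)                  # (k,)

    # Attention-weighted feature and estimated intersection point.
    feat = (w[:, None] * feats[idx]).sum(0)               # (C,)
    hit = (w[:, None] * points[idx]).sum(0)               # (3,)
    return feat, hit

# Example usage with random data.
pts = torch.randn(1000, 3)
fts = torch.randn(1000, 16)
o = torch.zeros(3)
d = torch.tensor([0.0, 0.0, 1.0])
feat, hit = proximity_attention(o, d, pts, fts)

Under this sketch, the close-up failure mode discussed in the abstract corresponds to the fixed top-k set and distance-only weights picking an unrepresentative local point configuration for rays not seen during training, which motivates the dynamic selection and robust attention the paper proposes.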
Supplementary Material: pdf
Submission Number: 309