Abstract: Point clouds, as sparse samples of a surface, inherently produce blur and holes in rendered images. Although previous methods tackle hole-filling by encoding point clouds as 3D neural representations and rendering them with neural decoders or neural radiance fields, they lack a unified representation of global and local information, which limits their sensitivity to texture changes and their rendering quality. To address this limitation, we present Point As Gaussian (PAG), which integrates a hybrid neural radial basis function (H-NRBF) that enables the network to capture both local and global features of the point cloud, thereby filling holes and improving the rendering quality of local details. Moreover, inspired by recent 3D Gaussian Splatting, we adopt 3D Gaussians as the representation of the radiance field predicted by our model, allowing the model to concentrate on learning point-based features. Extensive experiments on the synthetic ShapeNet dataset and the scanned Google Scanned Objects dataset demonstrate that our model renders an input point cloud into a photo-realistic image without additional optimization or fine-tuning. In addition, our method offers an alternative route to mesh reconstruction from point clouds: rendering images from the points and then applying image-based reconstruction algorithms.
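To make the "Point As Gaussian" idea concrete, the sketch below (not the paper's code) maps each input point to the parameters of a 3D Gaussian primitive: mean, per-axis scale, rotation quaternion, opacity, and color. The feature extractor here is a hypothetical random-weight MLP standing in for the learned H-NRBF network; only the output parameterization mirrors what a 3D Gaussian Splatting renderer would consume.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_gaussians(points, feat_dim=32):
    """Map each 3D point to 3D Gaussian parameters:
    mean (3), scale (3), rotation quaternion (4), opacity (1), RGB color (3).

    The two weight matrices below are random stand-ins; a real model
    would use a trained point-feature encoder (e.g. the paper's H-NRBF).
    """
    n = points.shape[0]
    w1 = rng.standard_normal((3, feat_dim)) * 0.1
    feats = np.tanh(points @ w1)                 # per-point features (stand-in)
    w2 = rng.standard_normal((feat_dim, 3 + 3 + 4 + 1 + 3)) * 0.1
    out = feats @ w2
    mean = points + 0.01 * out[:, :3]            # small predicted offset from the point
    scale = np.exp(out[:, 3:6])                  # positive scales via exp
    quat = out[:, 6:10]
    quat = quat / (np.linalg.norm(quat, axis=1, keepdims=True) + 1e-8)  # unit quaternion
    opacity = 1.0 / (1.0 + np.exp(-out[:, 10:11]))  # sigmoid -> (0, 1)
    color = 1.0 / (1.0 + np.exp(-out[:, 11:14]))    # sigmoid -> (0, 1)
    return mean, scale, quat, opacity, color

pts = rng.standard_normal((100, 3))
mean, scale, quat, opacity, color = predict_gaussians(pts)
print(mean.shape, scale.shape, quat.shape, opacity.shape, color.shape)
```

Because the radiance field is expressed directly as these per-point Gaussians, the network only has to learn point-based features; rasterizing the Gaussians is handled by a standard splatting renderer.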