Efficient Large-Scale Scene Point Cloud Upsampling with Implicit Neural Networks and Spatial Hashing

Published: 01 Jan 2025, Last Modified: 04 Nov 2025 · ICASSP 2025 · CC BY-SA 4.0
Abstract: Point cloud upsampling is a critical challenge in 3D vision, particularly for large-scale, real-world data. We propose ASFNet, a novel implicit neural network-based approach that combines adaptive spatial feature representation with efficient spatial hashing, significantly improving both upsampling quality and computational efficiency. ASFNet first encodes the point cloud as an implicit surface, then employs dynamic search and spatial hashing to rapidly optimize query point locations. This creates a uniform, continuous field around surfaces, enabling high-fidelity upsampling. Experiments on benchmark datasets, including the Oakland 3D dataset and VMR-Oakland-v2, demonstrate ASFNet's superiority. Our method achieves state-of-the-art performance with a Chamfer Distance of 5.559 × 10⁻³ on the Oakland 3D dataset while reducing processing time by up to 80% compared to existing methods; on this challenging dataset, ASFNet completes upsampling in just 150 seconds. These results underscore ASFNet's potential to advance real-time 3D vision applications such as autonomous navigation and augmented reality.
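The abstract's key efficiency ingredient, spatial hashing, can be illustrated with a minimal sketch. This is not the authors' implementation; it is a generic voxel hash grid for 3D points, where `cell_size` and `radius` are illustrative parameters (with `radius <= cell_size`, a neighbor query only needs to inspect the 27 cells surrounding the query point, avoiding a scan over the whole cloud):

```python
# Illustrative voxel hash grid for fast neighbor queries on 3D point clouds.
# Not the ASFNet code: a generic sketch of the spatial-hashing idea.
from collections import defaultdict
import numpy as np

def build_hash_grid(points, cell_size):
    """Hash each point index into its integer voxel cell key."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        key = tuple((p // cell_size).astype(int))  # floor to cell coordinates
        grid[key].append(i)
    return grid

def query_neighbors(grid, points, q, cell_size, radius):
    """Return indices of points within `radius` of q.

    Valid when radius <= cell_size: only the 27 cells adjacent to
    q's cell can contain a point that close.
    """
    q = np.asarray(q, dtype=float)
    c = tuple((q // cell_size).astype(int))
    hits = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in grid.get((c[0] + dx, c[1] + dy, c[2] + dz), ()):
                    if np.linalg.norm(points[i] - q) <= radius:
                        hits.append(i)
    return hits

points = np.array([[0.0, 0.0, 0.0], [0.05, 0.0, 0.0], [1.0, 1.0, 1.0]])
grid = build_hash_grid(points, cell_size=0.1)
near = query_neighbors(grid, points, [0.0, 0.0, 0.0], cell_size=0.1, radius=0.06)
```

Each lookup touches a constant number of cells rather than all N points, which is the kind of constant-factor win the abstract's 80% processing-time reduction alludes to.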