Keywords: implicit neural representation, point cloud compression
Abstract: Efficiently compressing and transmitting large-scale, high-fidelity 3D point clouds is a critical bottleneck for practical applications. We introduce a novel framework that reformulates point cloud compression as model compression. Our framework models high-fidelity point cloud geometry and attributes separately with compact implicit neural representations (INRs), then compresses the model parameters directly via quantization and entropy coding, decoupling representation from compression. To ensure this neural representation is both faithful and efficient, we employ a Kolmogorov-Arnold Network (KAN) as the INR backbone. Thanks to its superior approximation properties and parameter efficiency, KAN readily captures fine-grained details missed by traditional MLPs. Extensive evaluations on datasets such as KITTI, ScanNet, and 8iVFB demonstrate that our method significantly outperforms the MPEG standard and prior implicit neural representation approaches. Notably, it achieves competitive rate-distortion performance against state-of-the-art deep learning codecs. Our findings establish implicit neural compression as a powerful and practical pathway for developing the next generation of high-efficiency point cloud codecs.
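To make the "compress the model parameters directly via quantization and entropy coding" step concrete, here is a minimal NumPy sketch of uniform scalar quantization of INR weights with an empirical-entropy bitrate estimate. All names, the toy weight distribution, and the uniform quantizer are illustrative assumptions, not the paper's actual codec.

```python
import numpy as np

def quantize_params(params, step=0.01):
    # Uniform scalar quantization: map each weight to an integer symbol.
    q = np.round(params / step).astype(np.int32)
    return q, step

def dequantize(q, step):
    # Reconstruct approximate weights from the quantized symbols.
    return q * step

def entropy_bits(q):
    # Empirical entropy of the symbol stream: a lower bound on the size
    # an entropy coder (e.g. arithmetic coding) could achieve.
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum() * q.size)

# Toy stand-in for a trained INR's weight vector.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=10_000).astype(np.float32)

q, step = quantize_params(w, step=0.005)
w_hat = dequantize(q, step)
print("max reconstruction error:", np.abs(w - w_hat).max())
print("estimated coded size (bits):", entropy_bits(q))
```

The reconstruction error is bounded by half the quantization step, and the entropy estimate is typically far below the 32 bits per float of the raw parameters, which is the source of the rate savings.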
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 6638