Abstract: Autoregressive context models have proven effective for point cloud attribute compression. However, they suffer from prohibitive decoding latency because decoding proceeds serially over point clouds of very large scale. In this paper, we propose a rich, parallelizable context model for point cloud attribute compression that accelerates the decoding process. To further improve rate-distortion (RD) performance, we propose cross-coordinate and intra-coordinate attention modules that reduce the spatial redundancy of the latent representations. We validate our method on the large-scale Moving Picture Experts Group (MPEG) point cloud benchmarks and demonstrate that our model decodes substantially faster than previous autoregression-based methods while maintaining comparable RD performance.
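To make the latency claim concrete, the sketch below contrasts serial autoregressive decoding (one context-network call per point) with a grouped, checkerboard-style parallel schedule (a fixed number of passes regardless of point count). This is a minimal illustration of the general parallel-context idea only, not the authors' architecture; the module and tensor names are hypothetical.

```python
# Minimal sketch (not the paper's implementation) contrasting serial
# autoregressive decoding with a two-group parallel context model.
import torch
import torch.nn as nn


class ToyContextModel(nn.Module):
    """Predicts entropy parameters for each point from already-decoded context."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 2 * dim))

    def forward(self, decoded_context: torch.Tensor) -> torch.Tensor:
        # Returns stand-in (mean, scale) parameters for the entropy decoder.
        return self.net(decoded_context)


def serial_decode(num_points: int, ctx: ToyContextModel, dim: int = 8) -> int:
    """Autoregressive decoding: one network call per point -> O(N) sequential steps."""
    decoded = torch.zeros(1, dim)
    steps = 0
    for _ in range(num_points):
        _params = ctx(decoded)         # each point must wait for the previous one
        decoded = torch.randn(1, dim)  # stand-in for entropy-decoding this point
        steps += 1
    return steps


def two_group_decode(num_points: int, ctx: ToyContextModel, dim: int = 8) -> int:
    """Grouped parallel decoding: anchors first, then the rest -> 2 steps total."""
    half = num_points // 2
    anchors = torch.randn(half, dim)   # group 1: decoded in parallel from a shared prior
    _params = ctx(anchors)             # group 2: conditioned on all anchors at once
    _non_anchors = torch.randn(num_points - half, dim)
    return 2


if __name__ == "__main__":
    ctx = ToyContextModel()
    print("serial steps:  ", serial_decode(1024, ctx))    # 1024 sequential passes
    print("parallel steps:", two_group_decode(1024, ctx)) # 2 passes regardless of N
```

The point of the contrast: sequential passes, not per-point compute, dominate decoding latency in autoregressive schemes, so reorganizing the context into groups that can be decoded together removes the serial bottleneck.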