PVTransformer: Point-to-Voxel Transformer for Scalable 3D Object Detection

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · ICRA 2024 · CC BY-SA 4.0
Abstract: 3D object detectors for point clouds often rely on a pooling-based PointNet [20] to encode sparse points into grid-like voxels or pillars. In this paper, we identify that this common PointNet design introduces an information bottleneck that limits 3D object detection accuracy and scalability. To address this limitation, we propose PVTransformer: a transformer-based point-to-voxel architecture for 3D detection. Our key idea is to replace the PointNet pooling operation with an attention module, leading to a better point-to-voxel aggregation function. Our design respects the permutation invariance of sparse 3D points while being more expressive than the pooling-based PointNet. Experimental results show that PVTransformer achieves substantially better performance than the latest 3D object detectors. On the widely used Waymo Open Dataset, PVTransformer achieves a state-of-the-art 76.5 mAPH L2, outperforming the prior art, SWFormer [27], by +1.7 mAPH L2.
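To make the key idea concrete, below is a minimal PyTorch sketch of attention-based point-to-voxel aggregation, not the authors' implementation: the class name PointToVoxelAttention, the padded per-voxel tensor layout, and all shapes are illustrative assumptions. A single learned query attends over the points inside each voxel, so the output is invariant to point ordering (like max pooling) while learning a weighted aggregation instead of a hard max.

```python
import torch
import torch.nn as nn

class PointToVoxelAttention(nn.Module):
    """Hypothetical sketch: replace PointNet-style max pooling with
    attention when aggregating a voxel's points into one feature.
    One learned query per voxel attends over its (orderless) points,
    so the result is permutation invariant but more expressive than
    a hard max over point features."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # One shared learned query vector, broadcast to every voxel.
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, points: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # points:   [V, P, C]  up to P points per voxel, zero-padded
        # pad_mask: [V, P]     True where an entry is padding (assumes
        #                      every voxel has at least one real point)
        q = self.query.expand(points.size(0), -1, -1)       # [V, 1, C]
        voxel, _ = self.attn(q, points, points, key_padding_mask=pad_mask)
        return voxel.squeeze(1)                             # [V, C]

# Toy usage: 128 voxels, up to 32 points each, 64-dim point features.
pts = torch.randn(128, 32, 64)
mask = torch.zeros(128, 32, dtype=torch.bool)  # no padding in this example
feat = PointToVoxelAttention(dim=64)(pts, mask)
print(feat.shape)  # torch.Size([128, 64])
```

Permutation invariance holds because attention weights depend only on the set of point features, not their order; swapping max pooling for this weighted sum is what the abstract refers to as a more expressive aggregation function.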