Abstract: Parallel LiDAR is a novel framework for constructing next-generation intelligent LiDAR systems, and 3D object detection is a common perception task in parallel LiDAR research. However, current approaches rely heavily on CNNs or Transformers for feature interaction, placing high demands on computational resources and memory. To address these issues, we introduce the emerging Mamba architecture into 3D object detection and propose a new network, PillarMamba. PillarMamba uses a pure Mamba-based backbone composed of multiple stacked BEVMamba blocks. Experiments on the KITTI dataset demonstrate that PillarMamba achieves 65.41% mAP in the BEV perspective and 59.13% mAP in the 3D perspective. It paves the way for more efficient and accurate detection models and holds significant value for practical applications.