Abstract: Existing Transformer-based models for point cloud analysis suffer from quadratic complexity, which forces compromises in point cloud resolution and causes information loss. In contrast, the recently proposed Mamba model, based on state space models (SSMs), outperforms Transformers in multiple domains with only linear complexity. However, straightforwardly adopting Mamba does not achieve satisfactory performance on point cloud tasks. In this work, we present Mamba3D, a state space model tailored for point cloud learning that enhances local feature extraction, achieving superior performance, high efficiency, and strong scalability potential. Specifically, we propose a simple yet effective Local Norm Pooling (LNP) block to extract local geometric features. Additionally, to obtain better global features, we introduce a bidirectional SSM (bi-SSM) that combines a token-forward SSM with a novel backward SSM operating on the feature channel. Extensive experiments show that Mamba3D surpasses Transformer-based counterparts and concurrent works on multiple tasks, with or without pre-training. Notably, Mamba3D achieves multiple state-of-the-art (SoTA) results, including an overall accuracy of 92.6% (trained from scratch) on ScanObjectNN classification and 95.1% (with single-modal pre-training) on ModelNet40 classification, with only linear complexity. We will release the code and models upon publication.
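To make the bi-SSM idea concrete, below is a minimal PyTorch sketch of a block that runs one SSM scan forward over point tokens and a second, reversed scan over the feature channels, as the abstract describes. This is an illustrative toy under stated assumptions, not the authors' implementation: the real Mamba block uses an input-dependent selective scan, and every name here (SimpleSSM, BiSSMBlock, the fixed A/B/C parameters) is hypothetical.

```python
# Toy sketch of the bidirectional-SSM concept from the abstract:
# a forward scan over tokens plus a backward scan over feature channels.
# All module and parameter names are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Toy linear state-space scan: h_t = A*h_{t-1} + B*x_t, y_t = C*h_t."""
    def __init__(self, dim: int):
        super().__init__()
        self.A = nn.Parameter(torch.full((dim,), 0.9))  # per-channel state decay
        self.B = nn.Parameter(torch.ones(dim))          # input projection
        self.C = nn.Parameter(torch.ones(dim))          # output projection

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        h = torch.zeros(x.shape[0], x.shape[2], device=x.device)
        ys = []
        for t in range(x.shape[1]):            # sequential scan along the seq axis
            h = self.A * h + self.B * x[:, t]
            ys.append(self.C * h)
        return torch.stack(ys, dim=1)

class BiSSMBlock(nn.Module):
    """Forward SSM over the token axis; backward SSM over the channel axis."""
    def __init__(self, dim: int, num_tokens: int):
        super().__init__()
        self.token_ssm = SimpleSSM(dim)           # scans tokens, dim-sized state
        self.channel_ssm = SimpleSSM(num_tokens)  # scans channels, token-sized state

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        fwd = self.token_ssm(x)                            # token-forward scan
        # Flip channels so the scan runs backward, then treat channels as the sequence.
        bwd = self.channel_ssm(x.flip(-1).transpose(1, 2)) # (batch, dim, tokens)
        bwd = bwd.transpose(1, 2).flip(-1)                 # back to (batch, tokens, dim)
        return fwd + bwd

x = torch.randn(2, 128, 384)          # 2 clouds, 128 point tokens, 384-dim features
print(BiSSMBlock(384, 128)(x).shape)  # torch.Size([2, 128, 384])
```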
Primary Subject Area: [Content] Media Interpretation
Secondary Subject Area: [Generation] Multimedia Foundation Models, [Experience] Multimedia Applications
Relevance To Conference: This work introduces Mamba3D, a novel architecture for point cloud analysis that significantly outperforms Transformer-based models in both performance and efficiency, and continues to improve with pre-training. Mamba3D achieves multiple SoTA results on downstream tasks with only linear complexity. Currently, most large-scale point cloud models, such as PointLLM [1], ShapeLLM [2], and Point-Bind & Point-LLM [3], are multimodal and built on Transformers with quadratic complexity. In contrast, our proposed Mamba3D offers a promising backbone for multimodal large-scale point cloud models with only linear complexity.
[1] Xu R., Wang X., Wang T., et al. PointLLM: Empowering Large Language Models to Understand Point Clouds. arXiv preprint arXiv:2308.16911, 2023.
[2] Qi Z., Dong R., Zhang S., et al. ShapeLLM: Universal 3D Object Understanding for Embodied Interaction. arXiv preprint arXiv:2402.17766, 2024.
[3] Guo Z., Zhang R., Zhu X., et al. Point-Bind & Point-LLM: Aligning Point Cloud with Multi-modality for 3D Understanding, Generation, and Instruction Following. arXiv preprint arXiv:2309.00615, 2023.
Supplementary Material: zip
Submission Number: 2867