MiniGPT-3D: Efficiently Aligning 3D Point Clouds with Large Language Models using 2D Priors

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Large 2D vision-language models (2D-LLMs) have gained significant attention by bridging Large Language Models (LLMs) with images through a simple projector. Inspired by their success, large 3D point cloud-language models (3D-LLMs) integrate point clouds into LLMs in the same way. However, directly aligning point clouds with an LLM incurs expensive training costs, typically hundreds of GPU-hours on A100s, which hinders the development of 3D-LLMs. In this paper, we introduce MiniGPT-3D, an efficient and powerful 3D-LLM that achieves multiple SOTA results while training for only 27 hours on a single RTX 3090. Specifically, we propose to align 3D point clouds with LLMs using 2D priors from 2D-LLMs, exploiting the similarity between 2D and 3D visual information. We introduce a novel four-stage training strategy that performs modality alignment in a cascaded way, and a mixture-of-query-experts module that adaptively aggregates features with high efficiency. Moreover, we adopt the parameter-efficient fine-tuning methods LoRA and norm fine-tuning, resulting in only 47.8M learnable parameters, up to 260x fewer than existing methods. Extensive experiments show that MiniGPT-3D achieves SOTA on 3D object classification and captioning tasks at significantly lower training cost. Notably, MiniGPT-3D gains an 8.12-point increase in GPT-4 evaluation score on the challenging object captioning task compared to ShapeLLM-13B, which costs 160 total GPU-hours on 8 A800s. We are the first to explore efficient 3D-LLMs, offering new insights to the community. We will release the code and weights after review.
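The abstract names a mixture-of-query-experts module but does not spell out its structure. Below is a minimal sketch of one plausible form, assuming several learnable query banks blended by a router conditioned on pooled point-cloud features; the expert count, query count, hidden size, and mean-pooled router input are illustrative guesses, not values or design choices taken from the paper.

```python
import torch
import torch.nn as nn

class MixtureOfQueryExperts(nn.Module):
    """Sketch of a mixture-of-query-experts module: several learnable
    query sets (experts) are blended by a router conditioned on the
    point-cloud features, producing one adaptive query set for a
    Q-Former-style projector. All sizes here are assumptions."""

    def __init__(self, num_experts: int = 4, num_queries: int = 32, dim: int = 768):
        super().__init__()
        # One learnable query bank per expert: (E, Q, D).
        self.experts = nn.Parameter(torch.randn(num_experts, num_queries, dim) * 0.02)
        # Router maps pooled point features to per-expert weights.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, point_feats: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, D) point-cloud tokens from the encoder.
        pooled = point_feats.mean(dim=1)               # (B, D)
        gates = self.router(pooled).softmax(dim=-1)    # (B, E)
        # Weighted sum over experts -> adaptive queries: (B, Q, D).
        return torch.einsum('be,eqd->bqd', gates, self.experts)
```

A weighted blend like this keeps the projector's interface unchanged (it still receives a single query set) while letting the gate specialize the queries per input, which is one way the "adaptively aggregate features with high efficiency" claim could be realized.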
Primary Subject Area: [Generation] Multimedia Foundation Models
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: We introduce an efficient multimodal 3D-LLM, MiniGPT-3D, which achieves multiple SOTA results on 3D object classification and 3D object captioning tasks. Existing 3D-LLMs are built directly upon LLMs and 3D point encoders, which requires expensive vision-language alignment, typically hundreds of GPU-hours on A100s. In contrast, we propose to align 3D point clouds with LLMs using 2D priors from 2D-LLMs, reducing the training cost to 27 hours on a single NVIDIA RTX 3090 GPU. MiniGPT-3D takes the first step toward efficient multimodal 3D-LLMs, and we hope it brings new insights to the progress of large 3D point cloud-language models.
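The efficiency claim rests on the parameter-efficient recipe the abstract names (LoRA plus norm fine-tuning, 47.8M learnable parameters). Here is a hedged sketch of how such a recipe is typically wired in PyTorch: wrap selected linear layers with a low-rank adapter, freeze everything else, and re-enable only the norm parameters. The `LoRALinear` wrapper, the rank/alpha values, and the helper names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Hypothetical LoRA wrapper: frozen base weight plus a trainable
    low-rank update (B @ A), scaled by alpha / r."""

    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weight frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no-op at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

def freeze_all_but_norms(model: nn.Module) -> None:
    """Norm fine-tuning: freeze everything, then re-enable only norm layers."""
    for p in model.parameters():
        p.requires_grad = False
    for m in model.modules():
        if isinstance(m, nn.LayerNorm):
            for p in m.parameters():
                p.requires_grad = True

def count_trainable_millions(model: nn.Module) -> float:
    """Report learnable parameters in millions (the paper reports 47.8M total)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```

Because LoRA adapters and norm parameters are tiny relative to the frozen LLM backbone, a recipe in this spirit is consistent with the paper's reported "up to 260x fewer" learnable parameters than fully fine-tuned alternatives.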
Supplementary Material: zip
Submission Number: 3118