Point-GCC: Universal Self-supervised 3D Scene Pre-training via Geometry-Color Contrast

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM 2024 Poster · CC BY 4.0
Abstract: Geometry and color information provided by point clouds are both crucial for 3D scene understanding. These two types of information characterize different aspects of point clouds, but existing methods lack an elaborate design for their discrimination and relevance. Hence, we explore a 3D self-supervised paradigm that better utilizes the relations between the two. Specifically, we propose a universal 3D scene pre-training framework via Geometry-Color Contrast (Point-GCC), which aligns geometry and color information using a Siamese network. To account for practical application tasks, we design (i) hierarchical supervision with point-level contrast and reconstruction, and object-level contrast based on a novel deep clustering module, to close the gap between pre-training and downstream tasks; and (ii) an architecture-agnostic backbone that adapts to various downstream models. Benefiting from the object-level representation aligned with downstream tasks, Point-GCC can directly evaluate model performance, and the results demonstrate the effectiveness of our method. Transfer learning results on a wide range of tasks also show consistent improvements across all datasets, e.g., new state-of-the-art object detection results on the SUN RGB-D and S3DIS datasets. Code will be released on GitHub.
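For illustration, below is a minimal sketch of the point-level geometry-color contrast described in the abstract, assuming per-point embeddings produced by the two Siamese branches and a symmetric InfoNCE objective; the tensor names, function name, and temperature value are assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def point_level_geometry_color_contrast(geom_feats, color_feats, temperature=0.07):
    """Symmetric InfoNCE between per-point geometry and color embeddings.

    geom_feats, color_feats: (N, D) tensors from the two Siamese branches;
    row i of each tensor corresponds to the same point in the scene.
    """
    # L2-normalize so dot products become cosine similarities.
    g = F.normalize(geom_feats, dim=-1)
    c = F.normalize(color_feats, dim=-1)

    # (N, N) similarity matrix; diagonal entries are the positive pairs
    # (geometry and color embeddings of the same point).
    logits = g @ c.t() / temperature
    targets = torch.arange(g.size(0), device=g.device)

    # Contrast in both directions: geometry -> color and color -> geometry.
    loss = 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
    return loss
```

The same pairing idea can, in principle, be applied at the object level by replacing per-point embeddings with cluster-pooled embeddings from the deep clustering module.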
Primary Subject Area: [Content] Multimodal Fusion
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Indoor point clouds typically contain both color and geometric information. Color data derives from RGB images, corresponding to the 2D visual modality, while geometric data derives from depth sensors, representing the purely 3D visual modality. Existing approaches are either tailored solely to the 2D or 3D visual modality or utilize both simultaneously. However, among networks exploiting both modalities, existing methods lack an elaborate design for their discrimination and relevance. We argue that directly concatenating all modal information does not allow the model to learn the different aspects of point clouds discriminately. Hence, we explore a 3D self-supervised paradigm that better utilizes the relations between point cloud modalities. Extensive experiments show that Point-GCC significantly improves performance on various downstream tasks, notably achieving new state-of-the-art results on multiple datasets. These results demonstrate that our method enhances the understanding of diverse modal information in point cloud models.
Supplementary Material: zip
Submission Number: 3494