Complete Structure Guided Point Cloud Completion via Cluster- and Instance-Level Contrastive Learning

Published: 18 Sept 2025, Last Modified: 29 Oct 2025 · NeurIPS 2025 spotlight · CC BY-SA 4.0
Keywords: 3D computer vision, point cloud, self-supervised point cloud completion, contrastive learning
TL;DR: We employ contrastive learning to extract complete point cloud structures from partial (incomplete) point clouds to guide point cloud completion, achieving state-of-the-art (SOTA) results in self-supervised point cloud completion.
Abstract: Point cloud completion, which aims to reconstruct the missing parts of incomplete point clouds, is a pivotal task in 3D computer vision. Traditional supervised approaches often require complete point clouds for training supervision, which are not readily available in real-world applications. Recent studies have attempted to mitigate this dependency by employing self-supervised mechanisms. However, these approaches frequently yield suboptimal results due to the absence of complete structures in the point cloud data during training. To address these issues, we propose an effective framework that completes the point cloud under the guidance of a self-learned complete structure. A key contribution of our work is a novel self-supervised complete structure reconstruction module, which learns the complete structure explicitly from incomplete point clouds and thus eliminates the reliance on complete point clouds as training data. Additionally, we introduce a contrastive learning approach at both the cluster and instance level to extract shape features guided by the complete structure and to capture style features, respectively. This dual-level learning design ensures that the generated point clouds are both shape-complete and detail-preserving. Extensive experiments on both synthetic and real-world datasets demonstrate that our approach significantly outperforms state-of-the-art self-supervised methods.
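The dual-level contrastive objective described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a standard InfoNCE-style loss where cluster-level pairs contrast shape features and instance-level pairs contrast style features, with a hypothetical weighting `w` between the two terms.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE loss: row i of `positives` is the positive for
    row i of `anchors`; all other rows act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # NLL of the matching pair

def dual_level_loss(shape_a, shape_p, style_a, style_p, w=0.5):
    """Illustrative combination of the two levels (weight `w` is assumed):
    cluster-level contrast on shape features, instance-level on style."""
    cluster_loss = info_nce(shape_a, shape_p)
    instance_loss = info_nce(style_a, style_p)
    return w * cluster_loss + (1.0 - w) * instance_loss
```

In this sketch, embeddings from matched partial/complete views pull together while all other samples in the batch push apart; the weighting between the two levels would be a tunable hyperparameter.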
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 26655