Abstract: Unpaired point cloud completion involves filling in the missing parts of a point cloud without requiring partial-complete correspondence. Moreover, because point cloud completion is an ill-posed problem, the missing parts can be generated in multiple plausible ways. Existing GAN-based methods transform a partial shape encoding into a complete one in a low-dimensional latent feature space. However, "mode collapse" often occurs, where only a subset of the shapes is represented in the low-dimensional space, reducing the diversity of the generated shapes. In this paper, we propose a novel unpaired multimodal shape completion approach that operates directly in point coordinate space. We achieve unpaired completion via an unconditional diffusion model trained on complete data by "hijacking" the generative process. We further augment the diffusion model with two guidance mechanisms that help map the partial point cloud to a complete one while preserving its original structure. Extensive evaluations show that our method generates shapes that are more diverse and better preserve the original structures than alternative methods.
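The "hijacking" idea can be illustrated with a minimal sketch: at each reverse diffusion step, the observed partial points are re-noised to the current noise level and overwrite their slots in the sample, so the generated cloud stays consistent with the input while the rest is synthesized. All names below (`denoise_step`, the linear schedule, the toy shrink-toward-origin denoiser) are illustrative assumptions, not the paper's actual model or guidance mechanisms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear noise schedule (assumed; the paper's schedule is not specified here).
T = 50
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoise_step(x_t, t):
    # Stand-in for a learned denoiser. In the real method this would be an
    # unconditional diffusion model trained on complete point clouds.
    mean = x_t / np.sqrt(alphas[t])
    noise = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(betas[t]) * noise

def hijacked_sampling(partial, n_total):
    """Sample a complete cloud while pinning the observed partial points.

    partial: (m, 3) observed points; n_total: desired total point count.
    """
    m = partial.shape[0]
    x = rng.standard_normal((n_total, 3))  # start from pure Gaussian noise
    for t in range(T - 1, -1, -1):
        x = denoise_step(x, t)
        # "Hijack": diffuse the partial input forward to noise level t and
        # overwrite the first m points, preserving the observed structure.
        eps = rng.standard_normal(partial.shape) if t > 0 else 0.0
        x[:m] = np.sqrt(alpha_bars[t]) * partial + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x

partial = rng.standard_normal((128, 3)) * 0.1
out = hijacked_sampling(partial, 512)
print(out.shape)  # (512, 3): 128 pinned points plus 384 generated ones
```

In a real pipeline the slot assignment would come from a nearest-neighbor or registration step rather than fixed indices, and the paper's two guidance mechanisms would additionally steer the free points toward a coherent completion.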
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Generation] Generative Multimedia
Relevance To Conference: Point cloud completion is a form of 3D data processing, which plays an important role in many real-world applications such as VR/AR. In this paper, we propose a novel method that greatly improves the diversity of completion results compared to previous methods. Several point cloud completion papers have also been published at ACM MM:
[1] ASFM-Net: Asymmetrical Siamese Feature Matching Network for Point Completion. MM'21
[2] Point Cloud Completion via Multi-Scale Edge Convolution and Attention. MM'22
[3] SD-Net: Spatially-Disentangled Point Cloud Completion Network. MM'23
Supplementary Material: zip
Submission Number: 1162