- Keywords: Geometric Shape Assembly, Shape Matching, Pose Estimation, Implicit Representations
- Abstract: Learning to autonomously assemble shapes is a crucial skill for many robotic applications. Whereas the majority of existing part assembly methods focus on correctly posing semantic parts to recreate a whole object, we interpret assembly more literally: as mating geometric parts together to achieve a snug fit. By focusing on shape alignment rather than semantic cues, we can achieve cross-category generalization and scalability. In this paper, we introduce a novel task, pairwise 3D geometric shape assembly, and propose Neural Shape Mating (NSM) to tackle this problem. Given point clouds of two object parts, NSM learns to reason about their geometric structure and fit in order to predict a pair of 3D poses that tightly mate them together. In addition, we couple the training of NSM with an implicit shape reconstruction task, making NSM more robust to imperfect point cloud observations. To train NSM, we present a self-supervised data collection pipeline that generates pairwise shape assembly data with ground truth by randomly cutting an object mesh into two parts, resulting in a dataset of 19,226 shape assembly pairs spanning numerous object meshes and diverse cut types. We train NSM on the collected dataset and compare it with several point cloud registration methods and one part assembly baseline approach. Extensive experimental results and ablation studies under various settings demonstrate the effectiveness of the proposed algorithm. Additional material available at: neural-shape-mating.github.io.
- One-sentence Summary: Neural Shape Mating (NSM) tackles the pairwise 3D geometric shape assembly problem without using part or shape semantics and without goal conditioning.
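The self-supervised data generation described in the abstract (randomly cutting a shape into two parts, posing each part, and keeping the inverse poses as ground truth) can be sketched as follows. This is a minimal illustration on point clouds rather than meshes, and the `make_assembly_pair` helper and its details (planar cut through the centroid, uniform translations) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def random_rotation(rng):
    # Sample a uniformly random 3D rotation via QR decomposition
    # of a Gaussian matrix (Haar distribution on O(3), fixed to SO(3)).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))      # remove QR sign ambiguity
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1             # ensure det = +1 (proper rotation)
    return q

def make_assembly_pair(points, rng):
    """Cut a point cloud with a random plane through its centroid and
    pose each half with a random SE(3) transform.

    Returns the two transformed parts and, for each, the ground-truth
    pose (R, t) that was applied, so that (part - t) @ R recovers the
    original points of that half.
    """
    normal = rng.standard_normal(3)
    normal /= np.linalg.norm(normal)
    side = (points - points.mean(axis=0)) @ normal > 0

    parts, poses = [], []
    for mask in (side, ~side):
        R = random_rotation(rng)
        t = rng.uniform(-0.5, 0.5, size=3)
        parts.append(points[mask] @ R.T + t)  # x -> R x + t
        poses.append((R, t))
    return parts, poses
```

A learned model would take the two posed parts as input and regress the pair of poses; the sketch above only shows how supervised pairs with exact ground truth can be produced for free from any shape.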