Abstract: The interest in matching non-rigidly deformed shapes represented as raw point clouds is rising due to the proliferation of low-cost 3D sensors. Yet, the task is challenging since point clouds are irregular and there is a lack of intrinsic shape information. We propose to tackle these challenges by learning a new shape representation: a per-point high-dimensional embedding in an embedding space where semantically similar points share similar embeddings. The learned embedding has multiple beneficial properties: it is aware of the underlying shape geometry and is robust to shape deformations and various shape artefacts, such as noise and partiality. Consequently, this embedding can be directly employed to retrieve high-quality dense correspondences through a simple nearest neighbor search in the embedding space. Extensive experiments demonstrate new state-of-the-art results and robustness in numerous challenging non-rigid shape matching benchmarks and show its great potential in other shape analysis tasks, such as segmentation.
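The abstract describes retrieving dense correspondences by a nearest neighbor search over per-point embeddings. Below is a minimal sketch of that retrieval step, not the authors' implementation: it assumes the trained network has already produced per-point embedding arrays for a source and a target point cloud, and all function and variable names are hypothetical.

```python
# Sketch (assumed names): given per-point embeddings for a source and a target
# shape, dense correspondences are read off by 1-nearest-neighbor search in the
# embedding space.
import numpy as np
from scipy.spatial import cKDTree


def match_by_embedding(source_emb: np.ndarray, target_emb: np.ndarray) -> np.ndarray:
    """For each source point, return the index of the target point whose
    embedding is closest in Euclidean distance.

    source_emb: (N, D) per-point embeddings of the source shape
    target_emb: (M, D) per-point embeddings of the target shape
    returns:    (N,) indices into the target point cloud
    """
    tree = cKDTree(target_emb)            # index target embeddings once
    _, idx = tree.query(source_emb, k=1)  # 1-NN lookup per source point
    return idx


if __name__ == "__main__":
    # Random arrays stand in for a trained network's per-point embeddings.
    rng = np.random.default_rng(0)
    src = rng.normal(size=(1000, 128)).astype(np.float32)
    tgt = rng.normal(size=(1200, 128)).astype(np.float32)
    corr = match_by_embedding(src, tgt)
    print(corr.shape)  # (1000,)
```

Because the matching is a plain nearest-neighbor lookup, the quality of the correspondences rests entirely on how deformation-robust the learned embedding space is, which is the claim the abstract makes.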