A Consistency-Aware Spot-Guided Transformer for Versatile and Hierarchical Point Cloud Registration

Published: 25 Sept 2024 · Last Modified: 06 Nov 2024 · NeurIPS 2024 poster · License: CC BY-NC 4.0
Keywords: Point cloud registration, Rigid transformation estimation, Feature matching, Correspondence, Deep learning
TL;DR: We propose a novel consistency-aware spot-guided Transformer for versatile and hierarchical point cloud registration, achieving state-of-the-art accuracy, efficiency, and robustness on both outdoor and indoor benchmarks.
Abstract: Deep learning-based feature matching has shown clear superiority for point cloud registration in the absence of pose priors. Although coarse-to-fine matching approaches are prevalent, the coarse matching in existing methods is typically sparse and loose, with no consideration of geometric consistency, which forces the subsequent fine matching to rely on ineffective optimal transport and hypothesis-and-selection methods to enforce consistency. As a result, these methods are neither efficient nor scalable for real-time applications such as odometry in robotics. To address these issues, we design a consistency-aware spot-guided Transformer (CAST), which incorporates a spot-guided cross-attention module to avoid interference from irrelevant areas, and a consistency-aware self-attention module to enhance matching with geometrically consistent correspondences. Furthermore, a lightweight fine matching module for both sparse keypoints and dense features estimates the transformation accurately. Extensive experiments on both outdoor LiDAR point cloud datasets and indoor RGBD point cloud datasets demonstrate that our method achieves state-of-the-art accuracy, efficiency, and robustness.
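The abstract describes two attention modules operating on coarse features: a consistency-aware self-attention and a spot-guided cross-attention. The sketch below is a minimal illustration of how such modules could be wired in PyTorch; the class names, the use of additive attention biases for consistency scores, and the boolean spot mask are all assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of coarse-feature attention in the spirit of CAST (assumed design).
import torch
import torch.nn as nn


class ConsistencyAwareSelfAttention(nn.Module):
    """Self-attention over coarse features, biased by a pairwise geometric-consistency score (assumed form)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats: torch.Tensor, consistency: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, C); consistency: (B, N, N) additive attention bias.
        bias = consistency.repeat_interleave(self.attn.num_heads, dim=0)  # (B*heads, N, N)
        out, _ = self.attn(feats, feats, feats, attn_mask=bias)
        return feats + out


class SpotGuidedCrossAttention(nn.Module):
    """Cross-attention restricted to candidate 'spots', so queries ignore irrelevant areas (assumed form)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor, spot_mask: torch.Tensor) -> torch.Tensor:
        # src: (B, N, C), tgt: (B, M, C); spot_mask: (B, N, M), True where attention is blocked.
        # Each query should keep at least one unmasked target to avoid NaNs in the softmax.
        mask = spot_mask.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(src, tgt, tgt, attn_mask=mask)
        return src + out


if __name__ == "__main__":
    B, N, M, C = 1, 64, 72, 128
    src, tgt = torch.randn(B, N, C), torch.randn(B, M, C)
    consistency = torch.zeros(B, N, N)                   # e.g. from pairwise distance compatibility
    spot_mask = torch.zeros(B, N, M, dtype=torch.bool)   # False = allowed to attend

    src = ConsistencyAwareSelfAttention(C)(src, consistency)
    src = SpotGuidedCrossAttention(C)(src, tgt, spot_mask)
    print(src.shape)  # torch.Size([1, 64, 128])
```

In this reading, the consistency bias raises attention between mutually compatible coarse correspondences, while the spot mask confines cross-attention to promising regions, which is one plausible way to realize the behavior the abstract describes.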
Supplementary Material: zip
Primary Area: Machine vision
Submission Number: 2740