Abstract: Conventional point cloud registration methods usually employ an encoder-decoder architecture in which mid-level features are aggregated locally to extract geometric information. However, over-reliance on local features means that the boundary points of the two point clouds may not be adequately matched. To address this issue, we argue that boundary features can be further enhanced with rotation information, and we propose a rotation-invariant representation that replaces common 3D Cartesian coordinates as the network input, improving generalization to arbitrary orientations. Building on this technique, we propose a rotation-invariant Transformer for point cloud registration, which exploits the Transformer module's insensitivity to the arrangement and quantity of its inputs to capture global structural knowledge within local parts, yielding an overall comprehension of each point cloud. Extensive quantitative and qualitative evaluations on ModelNet40 demonstrate the effectiveness of the proposed method.
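The core idea of replacing Cartesian coordinates with rotation-invariant quantities can be illustrated with a minimal sketch. The abstract does not specify the paper's exact representation, so the features below (distance to the cloud centroid and nearest-neighbor distance) are hypothetical stand-ins; any rigid rotation leaves them unchanged, which is the property the network input needs.

```python
import numpy as np

def rotation_invariant_features(points):
    """Hypothetical rotation-invariant input features for a point cloud
    (N, 3): distance of each point to the centroid, and distance to its
    nearest neighbor. Both depend only on inter-point distances, so any
    rigid rotation of the cloud leaves them unchanged."""
    centroid = points.mean(axis=0)
    d_centroid = np.linalg.norm(points - centroid, axis=1)
    # Pairwise distances; mask the diagonal so a point is not its own neighbor.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    d_nn = dists.min(axis=1)
    return np.stack([d_centroid, d_nn], axis=1)  # shape (N, 2)

# Sanity check: features are identical before and after a random rotation.
rng = np.random.default_rng(0)
pts = rng.standard_normal((64, 3))
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
if np.linalg.det(q) < 0:
    q[:, 0] *= -1  # ensure a proper rotation (det = +1)
f_original = rotation_invariant_features(pts)
f_rotated = rotation_invariant_features(pts @ q.T)
print(np.allclose(f_original, f_rotated))  # True
```

Feeding such features instead of raw xyz coordinates is what lets a network generalize to arbitrary orientations without rotation augmentation.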