Abstract: Point cloud fusion is a key process in many computer vision applications such as multi-robot map building and multi-robot SLAM. When robots capture maps in different places, the maps suffer from large angular differences between the robots' viewpoints and from spatial misalignment. These problems pose great challenges to point cloud fusion. To overcome these difficulties, in this study, we propose a 3D viewpoint calibration method. The method relies on a novel combination of 3D-SIFT (scale-invariant feature transform) keypoints and FPFH (fast point feature histogram) features, which are used for feature matching. Based on the feature correspondences, we compute the transformation that resolves the viewpoint difference. The experiments demonstrate that (1) the keypoints and features used in our method are distinctive and robust to camera viewpoint change; and (2) viewpoint calibration reduces the number of iterations required by registration algorithms, so transforming the different viewpoints to a common one saves computation time and improves accuracy.
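The final step the abstract describes, computing a rigid transformation from feature correspondences, can be sketched with the standard Kabsch/Umeyama least-squares alignment. This is a minimal NumPy illustration, not the authors' implementation: it assumes the 3D-SIFT/FPFH matching has already produced paired keypoint coordinates `src` and `dst`, and solves for the rotation and translation that best map `src` onto `dst`.

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate R, t minimizing ||R @ src_i + t - dst_i||^2 over matched
    3D keypoint pairs (Kabsch/Umeyama method). src, dst: (N, 3) arrays."""
    # Center both point sets on their centroids.
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance between the centered sets.
    H = (src - c_src).T @ (dst - c_dst)
    # SVD gives the optimal rotation; the sign correction guards
    # against returning a reflection (det = -1).
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

In a full pipeline the correspondences coming from descriptor matching contain outliers, so this closed-form solve would typically sit inside a robust loop (e.g. RANSAC) before an iterative refinement such as ICP, which is where the reduced iteration count claimed in the abstract matters.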