Viewpoint calibration method based on point features for point cloud fusion

Published: 01 Jan 2017 · Last Modified: 10 Apr 2025 · ICIP 2017 · License: CC BY-SA 4.0
Abstract: Point cloud fusion is a significant process in many computer vision applications, such as multi-robot map building or multi-SLAM. When robots capture maps in different places, the maps suffer from large angular differences between the robots' viewpoints and from spatial misalignment. These problems pose great challenges for point cloud fusion. To overcome them, in this study we propose a 3D viewpoint calibration method. The method relies on a novel combination of 3D-SIFT (scale-invariant feature transform) keypoints and FPFH (fast point feature histogram) descriptors, which are used for feature matching. From the resulting feature correspondences, we compute the transformation that resolves the difference between viewpoints. The experiments demonstrate that (1) the keypoints and features used in our method are distinctive and robust to camera viewpoint change; and (2) viewpoint calibration reduces the number of iterations required by registration algorithms, and transforming the different viewpoints to a common nearby one saves computation time and improves accuracy.
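The final step the abstract describes, computing a rigid transformation from a set of feature correspondences, is commonly solved in closed form with the SVD-based Kabsch method. The sketch below illustrates only that step, with synthetic correspondences standing in for the paper's 3D-SIFT/FPFH matches; the function name and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD).

    src, dst: (N, 3) arrays of corresponding 3D points.
    """
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # 3x3 cross-covariance of the centered correspondences
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known viewpoint change from matched points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = np.pi / 4  # 45-degree rotation about the z-axis
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
moved = pts @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(pts, moved)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```

In practice the correspondences from descriptor matching contain outliers, so such a closed-form fit is typically wrapped in RANSAC before any ICP-style refinement; the viewpoint calibration the paper proposes serves as the coarse alignment that lets the iterative refinement converge in fewer steps.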
