Abstract: A novel camera autocalibration method is presented. Any camera model can be calibrated, and no calibration targets like checkerboards are used. The method requires the camera to be mounted on a lidar-equipped moving platform travelling through a structured environment along a known path.
The primary reason for cross-modal camera calibration is not to solve the sensor fusion problem, but to tap the large amount of accurate metric data available from the lidar. The number of measurements is easily four orders of magnitude higher than in checkerboard-based methods. This leads to improved estimation accuracy, especially for higher-order distortion coefficients.
In a multi-camera setup, the lidar additionally defines a common reference coordinate system for all cameras.
Compared to the majority of published methods on camera-lidar autocalibration, (i) our calibration procedure relies on motion features, (ii) the lidar-lidar and lidar-image feature correspondences, which are hard to obtain accurately, are not required, and (iii) both camera extrinsics and intrinsics, including complex distortion models, are autocalibrated.
Experiments show that the calibration accuracy reaches or exceeds the accuracy of methods relying on calibration targets.
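To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of how lidar-derived metric points can drive intrinsic calibration: given 3D points assumed to be already expressed in the camera frame (in the actual method the extrinsics would be estimated jointly) and their observed pixel locations, the intrinsics and a simple two-coefficient radial distortion model are fit by minimizing reprojection error. The parameter names, the distortion model, and the use of SciPy are illustrative assumptions.

```python
# Hypothetical sketch: fit pinhole intrinsics + radial distortion from
# lidar points assumed to be given in the camera coordinate frame.
import numpy as np
from scipy.optimize import least_squares

def project(params, pts_cam):
    """Project 3D points (N, 3) in camera coordinates to pixels using a
    pinhole model with two radial distortion coefficients [fx, fy, cx, cy, k1, k2]."""
    fx, fy, cx, cy, k1, k2 = params
    x = pts_cam[:, 0] / pts_cam[:, 2]
    y = pts_cam[:, 1] / pts_cam[:, 2]
    r2 = x * x + y * y
    d = 1.0 + k1 * r2 + k2 * r2 * r2          # radial distortion factor
    return np.stack([fx * d * x + cx, fy * d * y + cy], axis=1)

def residuals(params, pts_cam, pix_obs):
    # Reprojection error over all lidar-derived point/pixel correspondences.
    return (project(params, pts_cam) - pix_obs).ravel()

def calibrate(pts_cam, pix_obs, init=(800.0, 800.0, 640.0, 360.0, 0.0, 0.0)):
    # Robust (Huber) loss down-weights outliers from imperfect cross-modal alignment.
    return least_squares(residuals, x0=np.asarray(init, dtype=float),
                         args=(pts_cam, pix_obs), loss="huber").x
```

Because the lidar supplies orders of magnitude more metric points than a checkerboard, an optimization of this form is far better conditioned for the higher-order distortion terms than a target-based fit would be.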