Abstract: In autonomous driving, the LiDAR-camera system plays a crucial role in a vehicle's perception of 3D environments. To effectively fuse information from the camera and LiDAR, extrinsic calibration is indispensable. Recently, researchers have proposed deep learning-based methods that use convolutional networks to automatically extract features from LiDAR depth images and RGB images for calibration. However, these features do not interact sufficiently during feature matching, which limits calibration accuracy. To this end, we introduce a novel extrinsic calibration network (HIFM-Net) in this paper. It establishes a comprehensive connection between camera and LiDAR features by computing a globally-aware map-to-map cost volume and hierarchical point-to-map cost volumes. The former is used to regress large extrinsic offsets; the latter is employed to iteratively fine-tune the extrinsic parameters, with the rigidity of the LiDAR point cloud enforced at each iteration to improve regression robustness. Extensive experiments on the KITTI-odometry dataset demonstrate the superior performance of our HIFM-Net compared with other state-of-the-art learning-based methods.
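The abstract does not specify how the globally-aware map-to-map cost volume is formed. As a point of reference, the sketch below shows one common way to build such a volume: an all-pairs correlation between a camera feature map and a LiDAR-depth feature map. The function name, tensor shapes, and normalization are illustrative assumptions, not the paper's actual formulation.

```python
import torch

def global_cost_volume(cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
    """Hypothetical map-to-map cost volume via all-pairs feature correlation.

    cam_feat, lidar_feat: (B, C, H, W) feature maps extracted from the RGB
    image and the projected LiDAR depth image (assumed same resolution).
    Returns a (B, H*W, H, W) volume in which channel k holds the correlation
    of LiDAR location k with every camera location, giving each position a
    global view of all possible matches.
    """
    B, C, H, W = cam_feat.shape
    cam = cam_feat.view(B, C, H * W)    # flatten spatial dims: (B, C, HW)
    lid = lidar_feat.view(B, C, H * W)  # flatten spatial dims: (B, C, HW)
    # All-pairs dot products, scaled by feature dimension for stability.
    corr = torch.bmm(lid.transpose(1, 2), cam) / C**0.5  # (B, HW, HW)
    return corr.view(B, H * W, H, W)
```

Such a volume is typically flattened and fed to a regression head to predict a coarse extrinsic offset; the hierarchical point-to-map volumes described in the abstract would then refine this estimate iteratively.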