LiDAR-Camera Extrinsic Calibration with Hierarchical and Iterative Feature Matching

Published: 01 Jan 2024 · Last Modified: 16 Apr 2025 · ICRA 2024 · CC BY-SA 4.0
Abstract: In autonomous driving, the LiDAR-camera system plays a crucial role in a vehicle's perception of 3D environments. To effectively fuse information from the camera and LiDAR, extrinsic calibration is indispensable. Recently, researchers have proposed deep-learning-based methods that use convolutional networks to automatically extract features from LiDAR depth images and RGB images for calibration. However, these features do not interact sufficiently during feature matching, which limits calibration accuracy. To this end, this paper introduces a novel extrinsic calibration network, HIFM-Net. It establishes a comprehensive connection between camera and LiDAR features by computing a globally-aware map-to-map cost volume and hierarchical point-to-map cost volumes. The former is used to regress large extrinsic offsets; the latter iteratively fine-tunes the extrinsic parameters, with the rigidity of the LiDAR point cloud enforced in each iteration to improve regression robustness. Extensive experiments on the KITTI-odometry dataset demonstrate the superior performance of HIFM-Net compared to other state-of-the-art learning-based methods.
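To make the cost-volume idea concrete, below is a minimal PyTorch sketch of a globally-aware map-to-map cost volume followed by a 6-DoF regression head, written from the abstract's description alone. The names (global_cost_volume, ExtrinsicHead), the all-pairs-correlation formulation, and the axis-angle rotation parameterization are illustrative assumptions, not the authors' actual architecture.

import torch
import torch.nn as nn

def global_cost_volume(feat_cam, feat_lidar):
    """All-pairs correlation between camera and LiDAR feature maps.

    feat_cam, feat_lidar: (B, C, H, W) features from the two branches.
    Returns a (B, H*W, H, W) cost volume relating every camera location
    to every LiDAR-depth-image location (the 'globally-aware' part).
    """
    B, C, H, W = feat_cam.shape
    cam = feat_cam.flatten(2)                                 # (B, C, H*W)
    lid = feat_lidar.flatten(2)                               # (B, C, H*W)
    corr = torch.einsum('bci,bcj->bij', cam, lid) / C ** 0.5  # (B, H*W, H*W)
    return corr.view(B, H * W, H, W)

class ExtrinsicHead(nn.Module):
    """Regresses a 6-DoF extrinsic offset (3 translation + 3 rotation
    parameters, axis-angle here) from the cost volume."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 256, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, 6),
        )
    def forward(self, cost):
        return self.net(cost)  # (B, 6) extrinsic offset

# Hypothetical usage with feature maps at a downsampled, KITTI-like resolution.
fc = torch.randn(2, 64, 16, 48)
fl = torch.randn(2, 64, 16, 48)
delta = ExtrinsicHead(in_ch=16 * 48)(global_cost_volume(fc, fl))  # (2, 6)

Under the same reading of the abstract, the iterative fine-tuning stage would then re-project the LiDAR points with the current extrinsic estimate, sample local point-to-map cost volumes at their projections, and regress a small correction per iteration while constraining all points to move under a single rigid transform.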
