BEVCalib: LiDAR-Camera Calibration via Geometry-Guided Bird’s-Eye View Representation

Published: 08 Aug 2025, Last Modified: 16 Sept 2025, CoRL 2025 Poster, CC BY 4.0
Keywords: LiDAR-Camera Calibration, Autonomous Driving, BEV Features
TL;DR: BEVCalib, the first model to use bird's-eye view (BEV) features to perform LiDAR-camera calibration from raw data.
Abstract: Accurate LiDAR-camera calibration is the foundation of reliable multimodal fusion for environmental perception in autonomous driving and robotic systems. Traditional calibration methods require extensive data collection in controlled environments and cannot compensate for transformation changes during vehicle or robot movement. In this paper, we propose the first model that uses bird's-eye view (BEV) features to perform LiDAR-camera calibration from raw data, termed BEVCalib. To achieve this, we extract camera BEV features and LiDAR BEV features separately and fuse them into a shared BEV feature space. To fully utilize the geometry information in the BEV features, we introduce a novel feature selector that chooses the most important features in the transformation decoder, which reduces memory consumption and enables efficient training. Extensive evaluations on various datasets demonstrate that BEVCalib establishes a new state of the art: it improves the best open-source baseline by two orders of magnitude on KITTI, nuScenes, and our dynamic extrinsic dataset, and outperforms the best baseline in the literature by 72% on KITTI and 69% on nuScenes. All source code and checkpoints will be released.
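To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the fuse-then-select idea: camera and LiDAR features (assumed here to already lie on a shared BEV grid) are fused by a convolution, a learned score picks the top-k BEV cells standing in for the feature selector, and a small head regresses a 6-DoF extrinsic correction. All module names, shapes, and the top-k mechanism are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of BEV-feature fusion with a top-k feature selector
# feeding a 6-DoF extrinsic regressor. Shapes and modules are assumptions.
import torch
import torch.nn as nn


class BEVCalibSketch(nn.Module):
    def __init__(self, bev_channels: int = 64, num_selected: int = 256):
        super().__init__()
        # Fuse concatenated camera + LiDAR BEV features into one BEV map.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * bev_channels, bev_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Score each BEV cell so only the top-k most informative cells
        # reach the decoder (a stand-in for the paper's feature selector).
        self.score = nn.Conv2d(bev_channels, 1, kernel_size=1)
        self.num_selected = num_selected
        # Lightweight decoder: pooled selected features -> 6-DoF output
        # (3 translation + 3 rotation parameters, e.g. axis-angle).
        self.decoder = nn.Sequential(
            nn.Linear(bev_channels, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 6),
        )

    def forward(self, cam_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
        # cam_bev, lidar_bev: (B, C, H, W) features already lifted to the
        # same BEV grid (the lifting itself is omitted in this sketch).
        fused = self.fuse(torch.cat([cam_bev, lidar_bev], dim=1))  # (B, C, H, W)
        b, c, h, w = fused.shape
        flat = fused.flatten(2)                    # (B, C, H*W)
        scores = self.score(fused).flatten(1)      # (B, H*W)
        k = min(self.num_selected, h * w)
        _, idx = scores.topk(k, dim=1)             # indices of top-k cells
        idx = idx.unsqueeze(1).expand(-1, c, -1)   # (B, C, k)
        selected = flat.gather(2, idx)             # (B, C, k)
        pooled = selected.mean(dim=2)              # (B, C)
        return self.decoder(pooled)                # (B, 6) extrinsic correction


# Smoke test with random BEV features.
model = BEVCalibSketch()
delta = model(torch.randn(2, 64, 128, 128), torch.randn(2, 64, 128, 128))
print(delta.shape)  # torch.Size([2, 6])
```

One appeal of selecting a fixed number of cells before decoding, as sketched here, is that decoder cost and memory stay bounded regardless of BEV resolution, which is consistent with the abstract's claim that the feature selector reduces memory consumption and enables efficient training.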
Submission Number: 226