Unleashing the Power of Data Generation in One-Pass Outdoor LiDAR Localization (ACM MM2025)

Published: 04 Jul 2025, Last Modified: 18 Nov 2025 · OpenReview Archive Direct Upload · License: CC BY 4.0
Abstract: Point cloud regression localization has a wide range of applications in the multimedia field; in virtual reality and augmented reality, for example, accurate point cloud localization can significantly enhance the user experience. Recently, point cloud pose regression algorithms based on APR (Absolute Pose Regression) and SCR (Scene Coordinate Regression) have achieved near sub-meter accuracy, but they require multiple repetitive trajectories for training: the key to their success lies in the diversity of viewpoints, temporal changes, and trajectories, which is resource-consuming to collect. Moreover, due to GPS/INS errors, the coupling between trajectories is imperfect, and the stability of re-localization is insufficient. Since a single LiDAR pass already covers most of the scene, single-shot localization has the potential to approach or even surpass multi-trajectory localization methods through pose enhancement. Specifically, we present Pose Enhancement Localization (PELoc), which is trained on a single trajectory and introduces SSDA (Single-shot Data Augmentation) and LTI (LiDAR Trajectories-coupled Interpolation) to simulate different driving poses, together with KP-CL (Key Points Contrastive Learning) based on feature perturbation to mitigate the viewpoint and temporal differences that arise in similar scenes across different trajectories. Our algorithm has been evaluated on the Oxford, QE-Oxford, and NCLT datasets, where single-shot localization accuracy approaches the sub-meter level on QE-Oxford and NCLT. The code will be published at https://github.com/Eaton2022/PELoc.
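
To illustrate the single-trajectory pose-augmentation idea described above, the following is a minimal sketch (not the released PELoc implementation of SSDA/LTI): assuming each training sample is a LiDAR scan with a ground-truth SE(3) pose, one can simulate an alternative driving pose by applying a small random in-plane rigid perturbation to the scan and composing the same perturbation into the pose label, so the scan/pose pair stays geometrically consistent. The function name and the perturbation ranges (max_yaw, max_trans) are illustrative assumptions.

import numpy as np

def yaw_rotation(yaw):
    # 3x3 rotation about the vertical (z) axis
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def augment_single_shot(points, pose, max_yaw=np.deg2rad(10), max_trans=2.0, rng=None):
    # points: (N, 3) LiDAR points in the sensor frame
    # pose:   (4, 4) ground-truth sensor-to-world transform
    # Returns a perturbed scan and the matching perturbed pose label.
    rng = np.random.default_rng() if rng is None else rng
    yaw = rng.uniform(-max_yaw, max_yaw)
    t = rng.uniform(-max_trans, max_trans, size=3)
    t[2] = 0.0  # keep the perturbation in the driving plane

    # Perturbation expressed in the sensor frame.
    P = np.eye(4)
    P[:3, :3] = yaw_rotation(yaw)
    P[:3, 3] = t

    # Re-express the points in the "virtual" sensor frame ...
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    new_points = (np.linalg.inv(P) @ pts_h.T).T[:, :3]
    # ... and compose the same perturbation into the pose label.
    new_pose = pose @ P
    return new_points, new_pose

# Usage (dummy data): scan = np.random.rand(1000, 3); pose = np.eye(4)
# aug_scan, aug_pose = augment_single_shot(scan, pose)

Because the world coordinates of every point are unchanged (the new pose times the new sensor-frame coordinates equals the old pose times the old coordinates), such samples act like extra trajectories seen from slightly different driving poses without collecting new data.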