NAOL: NeRF-Assisted Omnidirectional Localization

Published: 2024. Last modified: 26 Jan 2026. Venue: ICPR (4) 2024. License: CC BY-SA 4.0.
Abstract: Visual localization is the challenging task of estimating a precise camera position and orientation from an image. Existing panorama datasets contain only a limited number of omnidirectional images, resulting in sparse spatial coverage and insufficient diversity compared to conventional, well-distributed datasets. This makes omnidirectional localization difficult for conventional feature-based visual localization algorithms. In this paper, we introduce a novel and efficient approach named NAOL, specifically designed to address the omnidirectional localization problem. Our pipeline unfolds in two pivotal stages: stage one employs a visual feature-based algorithm for preliminary coarse pose estimation. Because the available omnidirectional images are few and sparsely distributed, stage two augments the dataset and refines the coarse pose estimates produced in stage one. To this end, we introduce Depth-supervised Panorama Neural Radiance Fields (DP-NeRF), a novel approach designed for training on a single omnidirectional image with depth, which enriches the dataset and enables an iterative DP-NeRF-based algorithm that improves localization accuracy. Our experimental results validate the efficacy of the NAOL algorithm for visual localization in scenarios with sparse panorama datasets.
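The iterative refinement in stage two follows the general render-and-compare pattern used in NeRF-based pose estimation: render a panorama from the current pose hypothesis, compare it photometrically with the observed image, and adjust the pose to reduce the discrepancy. The sketch below illustrates this loop with a toy stand-in renderer and simple coordinate-descent updates; `render_panorama`, the 3-DoF pose parameterization, and the descent scheme are all illustrative assumptions, not the paper's actual DP-NeRF model or optimizer.

```python
import numpy as np

def render_panorama(pose, width=64):
    """Toy stand-in for rendering from a trained DP-NeRF (assumption).

    Maps a hypothetical 3-DoF pose (x, y, yaw) to a 1-D synthetic
    "panorama" signal; the real pipeline renders from the radiance field.
    """
    x, y, yaw = pose
    cols = np.arange(width)
    return np.sin(0.1 * cols + yaw) * x + np.cos(0.1 * cols - yaw) * y

def photometric_loss(pose, observed):
    # Mean squared error between the rendered and observed panoramas.
    return float(np.mean((render_panorama(pose) - observed) ** 2))

def refine_pose(coarse_pose, observed, step=0.5, iters=50):
    """Iterative render-and-compare refinement via coordinate descent."""
    pose = np.asarray(coarse_pose, dtype=float)
    best = photometric_loss(pose, observed)
    for _ in range(iters):
        improved = False
        for i in range(pose.size):
            for delta in (step, -step):
                cand = pose.copy()
                cand[i] += delta
                loss = photometric_loss(cand, observed)
                if loss < best:
                    pose, best, improved = cand, loss, True
        if not improved:
            step *= 0.5  # shrink the search step once no move helps
    return pose, best

# Simulate an observed panorama from a ground-truth pose, then refine
# a deliberately wrong coarse estimate toward it.
true_pose = np.array([1.0, -0.5, 0.3])
observed = render_panorama(true_pose)
refined, loss = refine_pose([0.0, 0.0, 0.0], observed)
```

In the actual method, the loss would be computed against the query omnidirectional image and the pose update would drive DP-NeRF renders, but the structure of the loop (render, score, update, repeat) is the same.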