SAGE: Spatial-visual Adaptive Graph Exploration for Efficient Visual Place Recognition

Published: 26 Jan 2026, Last Modified: 16 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Visual Place Recognition, Geo-Localization, Graph Mining, Global Descriptor, Image Representation
TL;DR: A novel Visual Place Recognition approach incorporating graph mining and local feature enhancement, achieving SoTA performance with parameter-efficient fine-tuning and compact descriptors.
Abstract: Visual Place Recognition (VPR) requires robust retrieval of geotagged images despite large variations in appearance, viewpoint, and environmental conditions. Prior methods focus on descriptor fine-tuning or fixed sampling strategies, yet neglect the dynamic interplay between spatial context and visual similarity during training. We present SAGE ($\underline{S}$patial-visual $\underline{A}$daptive $\underline{G}$raph $\underline{E}$xploration), a unified training pipeline that enhances granular spatial-visual discrimination by jointly improving local feature aggregation, training-time sample organization, and hard sample mining. We introduce a lightweight Soft Probing module that learns residual weights for patch descriptors from the training data before bilinear aggregation, boosting distinctive local cues. During training we reconstruct an online geo-visual graph that fuses geographic proximity with current visual similarity, so that candidate neighborhoods reflect the evolving embedding landscape. To concentrate learning on the most informative place neighborhoods, we seed clusters from high-affinity anchors and iteratively expand them with a greedy weighted clique expansion sampler. Implemented with a frozen DINOv2 backbone and parameter-efficient fine-tuning, SAGE achieves state-of-the-art results across eight benchmarks. Notably, our method obtains 100\% Recall@10 on SPED using only 4096-D global descriptors. The code and model are available at https://github.com/chenshunpeng/SAGE.
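The abstract describes two training-time components: an online geo-visual graph that fuses geographic proximity with current visual similarity, and a greedy weighted clique expansion sampler seeded from high-affinity anchors. The short Python sketch below illustrates one plausible reading of these ideas; the fusion weight `alpha`, the distance scale `sigma_geo`, the cluster size, and all function names are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def build_geo_visual_graph(embeddings, coords_m, alpha=0.5, sigma_geo=25.0):
    """Fuse geographic proximity and current visual similarity into one
    affinity matrix (alpha and sigma_geo are assumed hyperparameters)."""
    # Visual affinity: cosine similarity of the current (L2-normalized) embeddings.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    vis = z @ z.T                                   # values in [-1, 1]
    # Geographic affinity: Gaussian kernel on metric distance (metres).
    d = np.linalg.norm(coords_m[:, None, :] - coords_m[None, :, :], axis=-1)
    geo = np.exp(-(d ** 2) / (2.0 * sigma_geo ** 2))
    A = alpha * geo + (1.0 - alpha) * vis           # fused edge weights
    np.fill_diagonal(A, 0.0)                        # no self-loops
    return A

def greedy_clique_expansion(A, anchor, size=8):
    """Seed a cluster at a high-affinity anchor and greedily add the node
    with the largest total affinity to the current clique."""
    clique = [anchor]
    candidates = set(range(A.shape[0])) - {anchor}
    while len(clique) < size and candidates:
        best = max(candidates, key=lambda j: A[clique, j].sum())
        clique.append(best)
        candidates.remove(best)
    return clique

# Toy usage: 100 places with 256-D embeddings and planar coordinates in metres.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 256))
xy = rng.uniform(0, 200, size=(100, 2))
A = build_geo_visual_graph(emb, xy)
anchor = int(A.sum(axis=1).argmax())                # highest-affinity node as seed
print(greedy_clique_expansion(A, anchor, size=8))
```

In an actual training loop the affinity matrix would be rebuilt periodically from the evolving embeddings, so the sampled neighborhoods track the current state of the model rather than a fixed partition of the data.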
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 2622