MutualVPR: A Mutual Learning Framework for Resolving Supervision Inconsistencies via Adaptive Clustering
Keywords: Visual Place Recognition, Geo-localization, Autonomous Driving
TL;DR: A mutual learning framework for resolving inconsistent supervision signals in visual place recognition
Abstract: Visual Place Recognition (VPR) enables robust localization through image retrieval based on learned descriptors.
However, drastic appearance variations between images of the same place, caused by viewpoint changes, can lead to inconsistent supervision signals, thereby degrading descriptor learning.
Existing methods rely on either manually defined cropping rules or labeled data for view differentiation, but they suffer from two major limitations:
(1) reliance on labels or handcrafted rules restricts generalization capability;
(2) even within the same view direction, occlusions can introduce feature ambiguity.
To address these issues, we propose MutualVPR, a mutual learning framework that integrates unsupervised view self-classification and descriptor learning.
We first group images by geographic coordinates, then iteratively refine the clusters using K-means to dynamically assign place categories without manual labeling.
Specifically, we adopt a DINOv2-based encoder to initialize the clustering.
During training, the encoder and clustering co-evolve, progressively separating drastic appearance variations of the same place and enabling consistent supervision.
Furthermore, we find that capturing fine-grained image differences at a place enhances robustness.
Experiments demonstrate that MutualVPR achieves state-of-the-art (SOTA) performance across multiple datasets, validating the effectiveness of our framework in improving view-direction generalization and occlusion robustness.
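The clustering step described above (group images by geographic coordinates, then sub-cluster each group with K-means on encoder embeddings so that each view sub-cluster becomes its own place category) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings stand in for DINOv2 features, `k_views` is a hypothetical fixed number of view sub-clusters per place, and the K-means routine is a plain NumPy version.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal Lloyd's K-means; returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return labels

def assign_place_categories(embeddings, geo_ids, k_views=2):
    """Hypothetical category assignment: split each geo group into
    k_views sub-clusters of its embeddings; every sub-cluster becomes
    a distinct place category (no manual view labels needed)."""
    categories = np.empty(len(embeddings), dtype=int)
    next_cat = 0
    for g in np.unique(geo_ids):
        idx = np.where(geo_ids == g)[0]
        if len(idx) <= k_views:
            # Too few images to split: one category for the whole group.
            categories[idx] = next_cat
            next_cat += 1
            continue
        sub = kmeans(embeddings[idx], k_views)
        for j in np.unique(sub):
            categories[idx[sub == j]] = next_cat
            next_cat += 1
    return categories
```

In the full framework these categories would supervise descriptor learning, and the refreshed encoder embeddings would in turn be re-clustered, so the encoder and the clustering co-evolve over training.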
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 19412