NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences
Abstract: Visual place recognition (VPR) is critical not only for localization and mapping of autonomous vehicles but also for assistive navigation for the visually impaired. Enabling a long-term, large-scale VPR system requires addressing several challenges. First, different applications may require different image view directions, such as front views for self-driving cars and side views for people with low vision. Second, VPR in metropolitan scenes often raises privacy concerns, since images capture pedestrian and vehicle identity information, necessitating data anonymization before VPR queries and database construction.
Both factors could lead to VPR performance variations that
are not well understood yet. To study their influences, we
present the NYU-VPR dataset, which contains more than 200,000 images captured throughout 2016 over a 2 km × 2 km area near the New York University campus. We benchmark several popular VPR algorithms and show that side views are significantly more challenging for current VPR methods, while the influence of data anonymization is nearly negligible; we also provide hypothesized explanations and in-depth analysis.