Keywords: Deep Metric Learning, Visual Similarity Learning, Attention
Abstract: Deep metric learning (DML) provides rich measures of content-based visual similarity, which have become an essential component of many downstream tasks in computer vision and beyond. This paper questions a central paradigm of DML: the process of embedding individual images before comparing their embedding vectors. The embedding drastically reduces image information, discarding all spatial structure and pooling local image characteristics into a holistic representation. But how can we determine, for an individual image, the characteristics that would render it similar to a particular other image without having seen that other image? Rather than aiming for the least common denominator and requiring a common embedding space for all training images, our approach identifies, for each pair of input images, the locations and features that should be considered when comparing them. We follow a cross-attention approach that determines meaningful local features in one image by measuring their correspondences to the other image. Overall image similarity is then a non-linear aggregation of these meaningful local comparisons. The experimental evaluation on standard DML benchmarks shows that this approach significantly improves over the state of the art.
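To make the described idea concrete, below is a minimal sketch of pairwise similarity via cross-attention over local features, written in PyTorch. It is not the authors' implementation: the module name, the use of nn.MultiheadAttention, the feature dimensions, and the mean-pooled MLP score head are all illustrative assumptions; only the overall structure (queries from one image attending to the other, followed by a non-linear aggregation of local comparisons) follows the abstract.

```python
# Minimal sketch (illustrative, not the paper's architecture) of pairwise
# similarity computed by cross-attention between local features of two images.
import torch
import torch.nn as nn


class CrossAttentionSimilarity(nn.Module):
    """Scores an image pair by attending from local features of image A to
    local features of image B, then aggregating the local comparisons."""

    def __init__(self, dim: int = 512, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Non-linear aggregation of per-location comparisons into one score.
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_a, feats_b: (B, N, dim) local features, e.g. flattened CNN maps.
        # Queries come from image A; keys/values come from image B, so each
        # location in A is compared against its correspondences in B.
        attended, _ = self.attn(query=feats_a, key=feats_b, value=feats_b)
        local_scores = self.score(attended).squeeze(-1)   # (B, N) local comparisons
        return local_scores.mean(dim=1)                   # (B,) pairwise similarity


if __name__ == "__main__":
    sim = CrossAttentionSimilarity(dim=512, heads=4)
    a = torch.randn(2, 49, 512)   # e.g. a 7x7 grid of 512-d local features
    b = torch.randn(2, 49, 512)
    print(sim(a, b).shape)        # torch.Size([2])
```

Note that, unlike embedding-based DML, this similarity is computed jointly on the pair, so no single per-image vector is ever produced.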
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning