Dual Modal Meta Metric Learning for Attribute-Image Person Re-identification

ICNSC 2021 (modified: 10 Nov 2022)
Abstract: Attribute-image person re-identification (AIPR), which aims to retrieve persons from massive image galleries via an attribute query, is a meaningful but challenging cross-modal retrieval task. There is a large gap between the two modalities: on the image modality, one subject usually comprises many varying instances, whereas on the attribute modality, one subject corresponds to a single explicit instance. Unlike most existing AIPR methods, which focus on shrinking feature differences across modalities, we propose a dual modal meta metric learning (DM³L) method for AIPR in this paper. Specifically, in each episode we sample a subset as a new task and split the training data into a single-modal support set of person images and a dual-modal query set consisting of both person images and attributes. Based on the single-modal support set and the dual-modal query set, our DM³L learns not only attribute-image cross-modal metrics but also image-image intra-modal metrics. Therefore, our DM³L method encourages data on both the attribute and image modalities to be discriminative, improving AIPR. Experiments show that our DM³L outperforms state-of-the-art approaches on the Market-1501 Attribute and PETA datasets.
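The abstract describes an episodic setup with an image-only support set and a dual-modal query set, trained with both cross-modal (attribute-to-image) and intra-modal (image-to-image) metrics. As a rough illustration of that idea only, the sketch below computes a hardest-positive/hardest-negative triplet loss over each distance matrix and sums the two terms. All names (`dm3l_episode_loss`, `triplet_loss`), the specific triplet formulation, the margin, and the weighting `alpha` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def pairwise_sq_dist(q, s):
    """Squared Euclidean distance between each query row and each support row."""
    return ((q[:, None, :] - s[None, :, :]) ** 2).sum(axis=-1)

def triplet_loss(dist, q_labels, s_labels, margin=0.3):
    """Hardest-positive / hardest-negative margin loss over a (query x support)
    distance matrix; queries lacking a positive or negative contribute zero."""
    same = q_labels[:, None] == s_labels[None, :]
    hardest_pos = np.where(same, dist, -np.inf).max(axis=1)   # farthest same-identity support
    hardest_neg = np.where(~same, dist, np.inf).min(axis=1)   # closest other-identity support
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

def dm3l_episode_loss(img_support, s_labels,
                      img_query, iq_labels,
                      attr_query, aq_labels,
                      margin=0.3, alpha=1.0):
    """One episode: cross-modal term (attribute queries vs. image support)
    plus intra-modal term (image queries vs. image support)."""
    cross = triplet_loss(pairwise_sq_dist(attr_query, img_support),
                         aq_labels, s_labels, margin)
    intra = triplet_loss(pairwise_sq_dist(img_query, img_support),
                         iq_labels, s_labels, margin)
    return cross + alpha * intra
```

In this toy form, embeddings that keep same-identity attribute and image features close (and different identities far apart) in both distance matrices drive the episode loss to zero; in the actual method the embeddings would come from learned image and attribute encoders optimized over many sampled episodes.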