Abstract: We address the problem of person re-identification (reID), that is, retrieving person
images from a large dataset, given a query image of the person of interest. A
key challenge is to learn person representations robust to intra-class variations, as
different persons can share the same attributes, while the same person's appearance
varies with, e.g., viewpoint changes. Recent reID methods focus on learning features that are discriminative but robust to only a particular factor of variations (e.g., human
pose), which also requires corresponding supervisory signals (e.g., pose annotations).
To tackle this problem, we propose to disentangle identity-related and -unrelated
features from person images. Identity-related features contain information useful
for specifying a particular person (e.g., clothing), while identity-unrelated ones
hold other factors (e.g., human pose, scale changes). To this end, we introduce a
new generative adversarial network, dubbed identity shuffle GAN (IS-GAN), that
factorizes these features using identification labels without any auxiliary information. We also propose an identity-shuffling technique to regularize the disentangled
features. Experimental results demonstrate the effectiveness of IS-GAN, which significantly outperforms the state of the art on standard reID benchmarks, including
Market-1501, CUHK03, and DukeMTMC-reID. Our code and models are available
online: https://cvlab-yonsei.github.io/projects/ISGAN/.
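To make the identity-shuffling idea concrete, below is a minimal PyTorch sketch of the core regularization: given two images of the same person, identity-related features are swapped between them before reconstruction, so that identity-unrelated factors (pose, scale) cannot leak into the identity-related branch. The `Encoder`/`Decoder` modules, feature dimensions, and image sizes here are hypothetical placeholders, not the paper's part-level architecture, and the full IS-GAN additionally uses adversarial and identification losses omitted here.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Toy encoder that splits an image into identity-related and
    identity-unrelated feature vectors (placeholder architecture)."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_related = nn.Linear(128, dim)    # identity-related head
        self.to_unrelated = nn.Linear(128, dim)  # identity-unrelated head

    def forward(self, x):
        h = self.backbone(x)
        return self.to_related(h), self.to_unrelated(h)

class Decoder(nn.Module):
    """Toy decoder that reconstructs an image from the concatenated
    identity-related and identity-unrelated features."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, 128 * 8 * 4), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 4)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, f_related, f_unrelated):
        return self.net(torch.cat([f_related, f_unrelated], dim=1))

enc, dec = Encoder(), Decoder()
recon_loss = nn.L1Loss()

# x_a, x_b: two images of the SAME identity (e.g., different viewpoints);
# random tensors stand in for real data in this sketch.
x_a, x_b = torch.randn(8, 3, 32, 16), torch.randn(8, 3, 32, 16)
rel_a, unrel_a = enc(x_a)
rel_b, unrel_b = enc(x_b)

# Identity shuffling: reconstruct each image from the OTHER image's
# identity-related features and its own identity-unrelated features.
# Because both images share an identity, a faithful reconstruction is
# only possible if identity information lives in the related branch.
loss = recon_loss(dec(rel_b, unrel_a), x_a) + recon_loss(dec(rel_a, unrel_b), x_b)
loss.backward()
```

Note that this shuffling requires only identification labels (to pair images of the same person), which is consistent with the abstract's claim that no auxiliary supervision such as pose annotations is needed.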