Adaptive Camera Margin for Mask-guided Domain Adaptive Person Re-identification

Published: 01 Jan 2022, Last Modified: 05 Nov 2023 · ACM Multimedia 2022
Abstract: Transferring a person re-identification (ReID) model learned in a source domain to other domains is of great practical importance, since deploying a ReID model in a new scenario is common in real applications. Most existing unsupervised domain adaptation methods for person ReID follow the framework of pre-training in the source domain followed by clustering and fine-tuning in the target domain. However, reducing intra-domain variations and narrowing inter-domain gaps remain challenging problems under this framework. In this paper, we address these issues from two aspects. First, a voted-mask guided image channel shuffling strategy for data augmentation is proposed to enhance visual diversity: image channel shuffling serves as an efficient tool to bridge the inter-domain gap, while voted masks extract the foregrounds of pedestrian images to relieve the negative effects of varied backgrounds and thus reduce intra-domain variations. Second, a novel plug-and-play metric named adaptive camera margin is proposed to fully exploit low-cost camera tags for producing high-quality pseudo labels, which significantly reduces intra-domain variations without extra training cost. Specifically, the proposed network consists of a sensitive branch and an adaptive branch, accompanied by our data augmentation strategy; both are embedded into a joint learning framework that decouples visual representations to better capture transferable features across domains in both stages. The adaptive camera margin pulls samples with different camera IDs closer during DBSCAN clustering, which effectively and efficiently reduces the intra-domain variations caused by camera shift.
Comprehensive experiments show that the proposed method achieves competitive performance compared with state-of-the-art methods on the benchmark datasets. Source code will be released at: https://github.com/ahuwangrui/MACM.
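The camera-margin idea in the abstract, i.e. shrinking the distance between samples from different cameras before density-based clustering, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `features`, `cam_ids`, and the fixed `margin` value are made-up stand-ins (the paper adapts the margin rather than fixing it), and the clipping and cosine-distance choices are assumptions.

```python
import numpy as np

# Hypothetical sketch of a camera-aware distance adjustment: cross-camera
# pairs are pulled closer before DBSCAN so they are more likely to share a
# pseudo label. All data below is synthetic.
rng = np.random.default_rng(0)
features = rng.normal(size=(40, 8)).astype(np.float32)
features /= np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
cam_ids = rng.integers(0, 3, size=40)                        # camera tag per image

margin = 0.1  # fixed stand-in value; the paper adapts this margin instead

# Pairwise cosine distance between all samples.
dist = 1.0 - features @ features.T

# Subtract the margin only for pairs taken by different cameras, so that
# cross-camera neighbors are more likely to fall inside the clustering radius.
cross_cam = cam_ids[:, None] != cam_ids[None, :]
dist_adj = np.clip(dist - margin * cross_cam, 0.0, None)
np.fill_diagonal(dist_adj, 0.0)

# `dist_adj` would then be passed to DBSCAN with metric="precomputed"
# to produce pseudo labels for fine-tuning in the target domain.
```

Because the adjustment is applied to the precomputed distance matrix, it plugs into any clustering pipeline without changing the network or adding training cost, which matches the "plug-and-play" claim above.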
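The mask-guided channel shuffling augmentation can likewise be sketched in a few lines. This is a toy version under stated assumptions: the image and binary foreground mask are synthetic, and applying the shuffled channels only inside the mask is one plausible reading of "voted-mask guided"; the paper's voted masks and shuffling policy may differ.

```python
import numpy as np

# Hypothetical sketch: shuffle RGB channels on the masked foreground to add
# color diversity while leaving the background pixels untouched.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 32, 3), dtype=np.uint8)  # H x W x RGB
mask = np.zeros((64, 32), dtype=bool)
mask[8:56, 4:28] = True  # stand-in pedestrian foreground mask

perm = rng.permutation(3)        # random channel order, e.g. RGB -> GBR
shuffled = img[:, :, perm]       # whole image with channels permuted

# Keep original background; use shuffled channels only on the foreground.
aug = np.where(mask[:, :, None], shuffled, img)
```

Restricting the shuffle to the foreground follows the abstract's motivation: color statistics of the pedestrian are varied to bridge the inter-domain gap, while background clutter is not amplified.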
