Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking

31 Aug 2020 (modified: 11 Jul 2022)
Abstract: The success of DNNs has driven the extensive application of person re-identification (ReID) into a new era. However, whether ReID inherits the vulnerability of DNNs remains unexplored. Examining the robustness of ReID systems is important because the insecurity of ReID systems may cause severe losses, e.g., criminals may use adversarial perturbations to cheat CCTV systems. In this work, we examine the insecurity of current best-performing ReID models by proposing a learning-to-mis-rank formulation to perturb the ranking of the system output. As cross-dataset transferability is crucial in the ReID domain, we also perform a black-box attack by developing a novel multi-stage network architecture that pyramids the features of different levels to extract general and transferable features for the adversarial perturbations. Our method can control the number of malicious pixels by using differentiable multi-shot sampling. To guarantee the inconspicuousness of the attack, we also propose a new perception loss to achieve better visual quality. Extensive experiments on four of the largest ReID benchmarks (i.e., Market1501 [45], CUHK03 [17], DukeMTMC [33], and MSMT17 [40]) not only show the effectiveness of our method but also provide directions for future improvement of the robustness of ReID systems. For example, the accuracy of one of the best-performing ReID systems drops sharply from 91.8% to 1.4% after being attacked by our method. Some attack results are shown in Fig. 1. The code is available at https://github.com/whj363636/Adversarial-attack-on-Person-ReID-With-Deep-Mis-Ranking.
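As a rough illustration of the mis-ranking idea described in the abstract, the PyTorch sketch below inverts a standard triplet loss so that gallery features of a different identity are pulled toward the adversarial query while same-identity features are pushed away, which corrupts the ranked retrieval list. The function name, margin value, and exact formulation here are illustrative assumptions and are not taken from the paper or its repository.

```python
import torch
import torch.nn.functional as F

def mis_ranking_loss(adv_feat, pos_feat, neg_feat, margin=0.5):
    """Inverted triplet-style objective (illustrative assumption).

    adv_feat: embeddings of the adversarially perturbed query images
    pos_feat: embeddings of gallery images with the SAME identity
    neg_feat: embeddings of gallery images with a DIFFERENT identity
    """
    d_pos = F.pairwise_distance(adv_feat, pos_feat)  # should grow under attack
    d_neg = F.pairwise_distance(adv_feat, neg_feat)  # should shrink under attack
    # A standard triplet loss is max(d_pos - d_neg + margin, 0); swapping the
    # roles of the two distances rewards mis-ranked retrieval results instead.
    return F.relu(d_neg - d_pos + margin).mean()

# Example usage with random 256-dim embeddings for a batch of 8 queries:
loss = mis_ranking_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))
```

Minimizing such a loss with respect to the perturbation (rather than the model weights) drives same-identity matches down the ranked list, which is the opposite of what a ReID training objective encourages.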