Rethinking Impersonation and Dodging Attacks on Face Recognition Systems

Published: 20 Jul 2024 · Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: Face Recognition (FR) systems can be easily deceived by adversarial examples that manipulate benign face images through imperceptible perturbations. Adversarial attacks on FR encompass two types: impersonation (targeted) attacks and dodging (untargeted) attacks. Previous methods often achieve successful impersonation attacks on FR; however, a successful impersonation attack does not necessarily guarantee a successful dodging attack in the black-box setting. In this paper, our key insight is that the generation of adversarial examples should perform both impersonation and dodging attacks simultaneously. To this end, we propose a novel attack method, termed Adversarial Pruning (Adv-Pruning), that fine-tunes existing adversarial examples to enhance their dodging capabilities while preserving their impersonation capabilities. Adv-Pruning consists of Priming, Pruning, and Restoration stages. Concretely, we propose Adversarial Priority Quantification to measure the region-wise priority of the original adversarial perturbations, identifying and releasing those regions with minimal impact on the absolute variance of the model output. Then, Biased Gradient Adaptation adapts the adversarial examples to traverse the decision boundaries of both the attacker and victim models by adding perturbations that favor dodging attacks on the vacated regions, preserving the prioritized features of the original perturbations while boosting dodging performance. As a result, we maintain the impersonation capabilities of the original adversarial examples while effectively enhancing their dodging capabilities. Comprehensive experiments demonstrate the superiority of our method compared with state-of-the-art adversarial attacks.
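The abstract describes the pipeline only at a high level; the sketch below illustrates one plausible reading of it in PyTorch. It is a minimal sketch, not the authors' implementation: the patch-based region scoring, the cosine-similarity loss with equal weights, the pruning ratio, the step sizes, and the function names (region_priority, adv_prune) are all assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def region_priority(model, x_src, delta, patch=8):
    """Assumed form of Adversarial Priority Quantification: score each
    patch-sized region by the absolute change in the model's embedding
    when the perturbation in that region is zeroed out."""
    with torch.no_grad():
        base = model(x_src + delta)
        regions, scores = [], []
        _, _, H, W = delta.shape
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                masked = delta.clone()
                masked[:, :, i:i+patch, j:j+patch] = 0   # release this region
                out = model(x_src + masked)
                scores.append((base - out).abs().sum().item())
                regions.append((i, j))
    return regions, torch.tensor(scores)

def adv_prune(model, x_src, x_tgt, delta, steps=10, eps=8/255,
              alpha=1/255, patch=8, prune_ratio=0.2):
    """Prune the lowest-priority regions of an existing impersonation
    perturbation, then refill only those vacated regions with gradient
    steps biased toward dodging (assumed Biased Gradient Adaptation)."""
    regions, scores = region_priority(model, x_src, delta, patch)
    k = int(prune_ratio * len(regions))
    mask = torch.ones_like(delta)
    for idx in scores.argsort()[:k]:            # lowest-impact regions first
        i, j = regions[int(idx)]
        mask[:, :, i:i+patch, j:j+patch] = 0    # vacate them
    delta = delta * mask
    free = 1 - mask                              # updates allowed here only
    with torch.no_grad():
        emb_src = model(x_src)                   # identity to dodge
        emb_tgt = model(x_tgt)                   # identity to impersonate
    for _ in range(steps):
        delta = delta.detach().requires_grad_(True)
        emb = model(x_src + delta)
        # Minimize similarity to the source (dodging) while keeping
        # similarity to the target (impersonation); equal weights assumed.
        loss = F.cosine_similarity(emb, emb_src).mean() \
             - F.cosine_similarity(emb, emb_tgt).mean()
        loss.backward()
        with torch.no_grad():
            delta = (delta - alpha * delta.grad.sign() * free).clamp(-eps, eps)
    return delta.detach()
```

The `free` mask confines the restoration updates to the vacated regions, which is what allows the high-priority portions of the original perturbation, and hence its impersonation behavior, to survive the fine-tuning.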
Primary Subject Area: [Experience] Multimedia Applications
Secondary Subject Area: [Generation] Social Aspects of Generative AI
Relevance To Conference: Face images are crucial components of multimedia. Consequently, investigating adversarial attacks on FR is of great importance for bolstering the security and privacy of multimedia processing. Our work contributes to multimedia processing by exposing blind spots in systems that rely on face images, thereby enhancing their robustness and reliability. By identifying vulnerabilities in these systems through adversarial attacks, we can develop more robust algorithms and techniques that improve their security and privacy. This contributes to the overall improvement of security and privacy in multimedia applications where face images are used, such as virtual makeup try-ons, photo tagging, and social media filters.
Supplementary Material: zip
Submission Number: 430