Recognize Me If You Can: Two-stream Adversarial Transfer for Facial Privacy Protection using Fine-grained Makeup

Published: 01 Jan 2025 · Last Modified: 27 Sep 2025 · Vis. Comput. 2025 · CC BY-SA 4.0
Abstract: The popularity of social media brings a surge in privacy risks, e.g., the abuse of face recognition (FR) systems for excessive surveillance. Adversarial attack techniques can prevent the unauthorized recognition of facial images, but at the cost of reduced visual quality. Recent attempts to integrate adversarial perturbations with makeup transfer achieve a more natural appearance. However, a conflict remains between visual quality and adversarial effectiveness in facial privacy protection. Existing works focus merely on adjusting the adversarial network framework to reach a rough balance. Instead, we conduct a theoretical analysis to break through this trade-off. We observe that identity-related features and makeup information occupy distinct frequency bands within an image. Based on this insight, we decompose the image to separate the two sets of features, which enables fine-grained makeup transfer independent of the adversarial generation. Accordingly, we design a two-stream adversarial transfer network that protects facial privacy against malicious black-box FR with high transferability and visual quality. Extensive experiments demonstrate that our solution defends against two commercial APIs (i.e., Face++ and Aliyun) with little degradation of image quality.
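As a rough illustration of the frequency-band intuition stated in the abstract, the sketch below (not the authors' implementation; the Gaussian cutoff sigma, kernel size, and the assignment of identity vs. makeup cues to specific bands are illustrative assumptions) splits a face tensor into low- and high-frequency components. A two-stream design could then route the two bands to separate branches, e.g., makeup transfer on one and adversarial generation on the other, before recombining them.

```python
# Minimal sketch, assuming a PyTorch pipeline: split a face image into
# low- and high-frequency parts with a Gaussian blur. The band assignment
# (coarse color/shading vs. edges/fine texture) is an assumption here,
# meant only to illustrate the decomposition idea from the abstract.
import torch
import torch.nn.functional as F


def gaussian_kernel(sigma: float, size: int = 11) -> torch.Tensor:
    """Build a normalized 2-D Gaussian kernel of shape (size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-coords**2 / (2.0 * sigma**2))
    g = g / g.sum()
    return g.outer(g)


def frequency_split(img: torch.Tensor, sigma: float = 3.0):
    """Split a (B, C, H, W) image into low- and high-frequency components.

    The low-frequency component is the blurred image; the high-frequency
    component is the residual (img - low), which keeps fine detail.
    """
    c = img.shape[1]
    k = gaussian_kernel(sigma).to(dtype=img.dtype, device=img.device)
    k = k.repeat(c, 1, 1, 1)                      # depthwise kernel: (C, 1, k, k)
    pad = k.shape[-1] // 2
    padded = F.pad(img, (pad, pad, pad, pad), mode="reflect")
    low = F.conv2d(padded, k, groups=c)           # per-channel Gaussian blur
    high = img - low                              # high-frequency residual
    return low, high


if __name__ == "__main__":
    face = torch.rand(1, 3, 256, 256)             # placeholder input image
    low, high = frequency_split(face)
    print(low.shape, high.shape)                  # each stream gets one band
```

In such a setup, each stream operates on its own band and the two outputs are merged into the final protected image; the actual decomposition and network architecture used in the paper may differ.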