Tailor-Made Face Privacy Protection via Class-Wise Targeted Universal Adversarial Perturbations

Published: 2025 · Last Modified: 29 Sept 2025 · IEEE Trans. Dependable Secur. Comput. 2025 · CC BY-SA 4.0
Abstract: The widespread application of face recognition poses unprecedented threats to individual privacy, as face images can be easily and stealthily analyzed. Efforts have been made to employ adversarial perturbations to disrupt automatic inference by unauthorized face recognition systems. However, existing schemes fail to satisfy the personalized protection requirements of individuals, which may diminish the user experience. In this article, we propose a novel scheme that provides tailor-made face privacy protection for individuals via class-wise targeted universal adversarial perturbations (CT-UAPs). In our scheme, each individual can utilize a user-specific CT-UAP to exclusively generate protected faces whose identification output is a virtual identity predefined by the user. For the generation of CT-UAPs, we develop an optimization-based method that guides the feature vectors of the protected faces toward the class-wise feature space of the predefined virtual identity while simultaneously departing from that of the original identity. Extensive experimental results demonstrate the effectiveness of our scheme against five face recognition models. In addition, the interpretability of CT-UAPs is highlighted by the experimental results obtained through two-dimensional principal component analysis.
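As a rough illustration of the optimization described in the abstract, the sketch below optimizes one user-specific universal perturbation so that the embeddings of the user's protected faces move toward a class-wise feature center of the virtual identity and away from that of the original identity. This is a minimal sketch under assumptions (a generic face-embedding model, cosine-similarity pull/push terms, an L_inf budget, and the `generate_ct_uap` helper name are all illustrative choices, not the authors' exact formulation or hyperparameters).

```python
# Minimal sketch of class-wise targeted UAP (CT-UAP) optimization.
# Assumptions: a generic face-embedding model, cosine-similarity losses,
# and an L_inf bound; not the paper's exact objective or settings.
import torch
import torch.nn.functional as F

def generate_ct_uap(model, user_faces, virtual_center, original_center,
                    epsilon=8 / 255, steps=500, lr=0.01, alpha=1.0, beta=1.0):
    """Optimize one perturbation delta shared by all of a user's face images
    (i.e., universal for that user, targeted at the virtual identity)."""
    delta = torch.zeros_like(user_faces[0:1], requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        protected = torch.clamp(user_faces + delta, 0.0, 1.0)
        feats = F.normalize(model(protected), dim=1)  # embeddings of protected faces

        # Pull features toward the virtual identity's class-wise center ...
        pull = 1.0 - F.cosine_similarity(feats, virtual_center.expand_as(feats)).mean()
        # ... and push them away from the original identity's center.
        push = F.cosine_similarity(feats, original_center.expand_as(feats)).mean()

        loss = alpha * pull + beta * push
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Keep the perturbation imperceptible via an L_inf projection.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return delta.detach()
```

In this reading, each user would precompute a single CT-UAP with their own choice of virtual identity and then add it to any face image before sharing; the class-wise centers could, for instance, be mean embeddings of enrollment images for the two identities, though the paper's exact construction may differ.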