Query-Based External Information Leakage Attacks on Face Recognition Models

Published: 01 Jan 2024 · Last Modified: 05 Mar 2025 · IJCNN 2024 · CC BY-SA 4.0
Abstract: Recent studies have demonstrated external information leakage (EIL) attacks, which allow an attacker to infer various sensitive implicit properties of a machine learning (ML) model’s training data. Most of these attacks assume either 1) a white-box scenario, in which the attacker has complete access to the ML model, its structure, and its parameters, or 2) a black-box (or gray-box) scenario with unrealistic requirements, such as a high query budget or high computational resources for the attacker. In this paper, we propose two practical query-based (i.e., black-box) EIL attacks that target face recognition ML models and allow an attacker to infer sensitive implicit properties, such as the facial characteristics, gender, ethnicity, income level, and average age of the individuals in the training data, with a limited number of queries. The first attack, referred to as the random noise injection (RNI) attack, exploits the effect that injecting random noise into input samples has on the target model’s predictions. The second attack, referred to as the property substitute model (PSM) attack, creates a substitute model for each examined property value and compares its predictions to the target model’s predictions. Our comprehensive evaluation (a total of 730 experiments), performed on the CelebA dataset, shows that the proposed attacks outperform existing EIL attacks and successfully infer private information, posing a threat to the privacy and security of face recognition models.
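The abstract describes the two attacks only at a high level. The following minimal Python sketch illustrates the query patterns they rely on, under stated assumptions: a black-box `query_model` callable returning prediction scores for a batch of images, attacker-chosen `probe_images`, and (for PSM) a dictionary of pre-trained `substitutes`, one per candidate property value. All names and parameters are illustrative and are not taken from the paper.

```python
# Illustrative sketch only, not the authors' implementation.
import numpy as np

def rni_prediction_shift(query_model, probe_images, noise_std=0.05, n_queries=10, rng=None):
    """RNI idea: measure how much random input noise shifts the target
    model's predictions, using only black-box queries."""
    rng = rng or np.random.default_rng()
    clean = query_model(probe_images)                    # baseline predictions
    shifts = []
    for _ in range(n_queries):
        noise = rng.normal(0.0, noise_std, size=probe_images.shape)
        noisy = np.clip(probe_images + noise, 0.0, 1.0)  # keep pixels in [0, 1]
        shifts.append(np.abs(query_model(noisy) - clean).mean())
    return float(np.mean(shifts))                        # sensitivity statistic

def psm_infer_property(query_model, substitutes, probe_images):
    """PSM idea: return the property value whose substitute model agrees most
    with the target model on the probe set."""
    target_labels = query_model(probe_images).argmax(axis=1)
    agreement = {
        value: float((sub(probe_images).argmax(axis=1) == target_labels).mean())
        for value, sub in substitutes.items()
    }
    return max(agreement, key=agreement.get)
```

In this reading, the RNI statistic would be correlated with an implicit property of the training data, while PSM simply selects the candidate property value whose substitute best mimics the target's behavior; both use a limited number of queries, consistent with the black-box threat model described above.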