AdversariaLeak: External Information Leakage Attack Using Adversarial Samples on Face Recognition Systems

Published: 01 Jan 2024, Last Modified: 05 Mar 2025 · ECCV 2024 · CC BY-SA 4.0
Abstract: Face recognition (FR) systems are vulnerable to external information leakage (EIL) attacks, which can reveal sensitive information about the training data, thus compromising the confidentiality of the company’s proprietary data and the privacy of the individuals concerned. Existing EIL attacks mainly rely on unrealistic assumptions, such as a high query budget for the attacker and massive computational power, resulting in impractical EIL attacks. We present AdversariaLeak, a novel and practical query-based EIL attack that targets the face verification model of FR systems by using carefully selected adversarial samples. AdversariaLeak uses substitute models to craft adversarial samples, which are then handpicked to infer sensitive information. Our extensive evaluation on the MAAD-Face and CelebA datasets, which includes over 200 different target models, shows that AdversariaLeak outperforms state-of-the-art EIL attacks in inferring the property that best characterizes the FR model’s training set while maintaining a small query budget and practical attacker assumptions.