Abstract: Face recognition systems (FRSs) typically store databases of discriminative real-valued template vectors, which are extracted from each enrolled user’s facial image(s). Such template databases must be carefully protected for user privacy—indeed, the dangers of template leakage have been widely reported in the literature. In contrast, the similarity scores between queried images and enrolled users are often unprotected and can be readily obtained through typical FRS APIs. Such scores provide a potential avenue of adversarial attack on FRSs, but recently proposed score-based attacks remain largely impractical because they essentially rely on trial-and-error strategies that use an enormous number of adaptive queries (>50K) for face reconstruction. We present the first practical score-based face reconstruction and impersonation attack against three commercial FRS APIs—AWS CompareFaces, FACE++, and KAIROS—as well as five commonly used pre-trained open-source FRSs. Our attack is carried out in the black-box FRS model, where the adversary has no knowledge of the FRS (underlying models, parameters, template databases, etc.) beyond the ability to make a limited number of similarity score queries. Notably, the attack is straightforward to implement, requires no trial-and-error guessing, and uses a small number of non-adaptive score queries. We motivate the attack by analyzing the topological meaning of similarity scores and then present our novel method using orthogonal face sets: a precomputed approximate basis set of human-like face images that enables us to obtain meaningful similarity scores from a small number of non-adaptive queries. Our approach successfully reconstructs human-like impersonation images with >20% (resp. >96%) success rates across three test datasets when directly attacking the AWS CompareFaces API (resp. the open-source CosFace FRS) using only 100 queries—up to two orders of magnitude fewer than previous approaches.
We provide evidence that personally identifiable biometric features are captured in our reconstructions by evaluating our approach in transfer-like attack settings and through other image similarity metrics.
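To illustrate the intuition behind the orthogonal-basis idea, the following toy sketch (our own construction, not the paper's code) assumes the FRS score is the cosine similarity of unit-norm embeddings. Querying a hidden target against k orthonormal "basis" vectors returns its coordinates in that basis, so a linear combination of the basis weighted by the scores recovers the target's projection onto the basis span, non-adaptively, in exactly k queries. All names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 512, 100                    # embedding dimension, number of basis queries

# Hypothetical orthonormal basis (in the paper, these correspond to
# precomputed human-like face images whose embeddings are near-orthogonal).
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))  # columns are orthonormal
basis = basis.T                                       # shape (k, d)

# Hidden target embedding, assumed here to lie in the basis span.
target = basis.T @ rng.standard_normal(k)
target /= np.linalg.norm(target)

# One non-adaptive similarity query per basis face:
# s_i = cos(target, b_i) = <target, b_i> for unit vectors.
scores = basis @ target

# Linear reconstruction from the scores: projection onto span(b_1..b_k).
recon = scores @ basis
recon /= np.linalg.norm(recon)

print(float(recon @ target))       # close to 1.0 when target lies in the span
```

Real embeddings of face images are not exactly orthogonal and the target need not lie in the span, which is why the paper speaks of an *approximate* basis; the sketch only shows why non-adaptive scores against a fixed set can carry enough geometric information for reconstruction.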