Abstract: Face recognition systems are widely deployed in a variety of applications. In such systems, features (called templates) are extracted from each face image and stored in the system's database. In this paper, we propose an attack against face recognition systems in which the adversary gains access to a portion of the facial templates and aims to reconstruct the underlying face images. To this end, we train a face reconstruction network to invert partially leaked templates. In our experiments, we evaluate the vulnerability of state-of-the-art face recognition systems on several datasets, including MOBIO, LFW, and AgeDB. Our experiments demonstrate that face recognition systems are vulnerable to template inversion even when only a portion of each template is leaked. For example, with only 20% of a facial template, an adversary can achieve an attack success rate of 87% against an ArcFace-based system on the LFW dataset configured at a false match rate of 0.1%. To our knowledge, this is the first work on the inversion of partially leaked facial templates, and it paves the way for future studies of attacks against face recognition systems based on partially leaked templates.
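The reported success rate is measured against a system operating at a fixed false match rate (FMR). As a minimal sketch, the snippet below illustrates how such an evaluation is typically set up: a decision threshold is calibrated on impostor comparison scores so that the FMR equals 0.1%, and the attack success rate is the fraction of reconstructed-face templates whose similarity to the victim's enrolled template exceeds that threshold. All data here are synthetic stand-ins, and the 512-dimensional templates and cosine scoring are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Row-wise cosine similarity between two batches of templates.
    return np.sum(a * b, axis=-1) / (
        np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1)
    )

DIM = 512  # assumed template dimensionality (e.g., ArcFace embeddings)

# Impostor scores: comparisons between templates of different identities,
# used only to calibrate the decision threshold at the target FMR.
impostor = cosine(rng.normal(size=(100_000, DIM)),
                  rng.normal(size=(100_000, DIM)))
threshold = np.quantile(impostor, 1 - 0.001)  # FMR = 0.1%

# Attack scores: similarity between the template extracted from each
# reconstructed face and the victim's enrolled template (synthetic here).
attack = cosine(rng.normal(size=(1_000, DIM)),
                rng.normal(size=(1_000, DIM)))
success_rate = float(np.mean(attack > threshold))
```

With real reconstructions, `attack` would be computed by passing each reconstructed image through the target feature extractor; with the synthetic scores above, `success_rate` stays near the FMR by construction.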