Abstract: Face recognition systems have become an essential part of our lives, and some are deployed without the consent of their subjects. By analyzing the weaknesses of state-of-the-art face recognition models, defense mechanisms can be developed that manipulate the underlying biometric information such models depend on. We design a black-box adversarial attack on face recognition models using a simple UNet-based generative model that transforms the biometric information to change the identification output. We propose a novel loss composition that preserves perceptual similarity while pushing the face embeddings apart. We demonstrate the effectiveness of our approach on four face recognition models, decreasing their identification accuracy by an average of 76.34%. We also compare our approach to other attacks, conduct ablation studies, and experiment with both untargeted and targeted attack settings. Overall, our results show that the relationship between a face embedding and its facial recognition output is a fragile one, easily manipulated by a simple generative process.
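To make the described loss composition concrete, the following is a minimal sketch (not the authors' code) of a combined objective that keeps the generated image perceptually close to the input while pushing its face embedding away from the original. The specific perceptual term (pixel-space L1 here), the cosine embedding similarity, the function name `attack_loss`, and the weights `w_perc`/`w_emb` are all illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def attack_loss(x, x_adv, face_model, w_perc=1.0, w_emb=1.0):
    """Hypothetical combined loss: perceptual-similarity term plus an
    embedding-similarity term to be minimized jointly.

    x, x_adv   : original and generated images, shape (B, 3, H, W)
    face_model : frozen face recognition network mapping images to embeddings
    """
    # Perceptual term (a simple pixel-space L1 stand-in; feature-based
    # losses such as LPIPS are common alternatives) keeps x_adv close to x.
    perceptual = F.l1_loss(x_adv, x)

    # Embedding term: cosine similarity between the two face embeddings.
    # Minimizing it pushes the identities apart (the untargeted setting).
    with torch.no_grad():
        emb_orig = F.normalize(face_model(x), dim=-1)
    emb_adv = F.normalize(face_model(x_adv), dim=-1)
    embedding_sim = (emb_orig * emb_adv).sum(dim=-1).mean()

    return w_perc * perceptual + w_emb * embedding_sim
```

In a training loop, this loss would be backpropagated through the UNet-style generator producing `x_adv`, with the face recognition model held fixed; a targeted variant would instead maximize similarity to a chosen target identity's embedding.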