Embedding Backdoors as Facial Features: Invisible Backdoor Attacks Against Face Recognition Systems
Abstract: Deep neural network (DNN) based face recognition systems have been widely deployed in identity authentication scenarios. However, recent studies show that DNN models are vulnerable to backdoor attacks: an attacker can embed a backdoor into a neural network by modifying its internal structure or poisoning its training set, and can then log in to the system as the victim while legitimate users' normal use of the system remains unaffected. The backdoors used in existing attacks, however, are visually perceptible (e.g., black-framed glasses or purple sunglasses), which arouses human suspicion and thus leads to the failure of the attacks. In this paper, we propose a novel backdoor attack method, BHF2 (Backdoor Hidden as Facial Features), in which the attacker embeds the backdoor as inherent facial features. The proposed method greatly enhances the concealment of the injected backdoor, making the attack more difficult to discover. Moreover, BHF2 can be launched under black-box conditions, where the attacker knows nothing about the target face recognition system. The attack can therefore be applied in rigorous identity authentication scenarios where users are not allowed to wear any accessories. Experimental results show that BHF2 achieves a high attack success rate (up to 100%) on the state-of-the-art face recognition model DeepID1, while the normal performance of the system is hardly affected (the recognition accuracy drops by as little as 0.01%).
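To make the poisoning mechanism summarized above concrete, the snippet below is a minimal, hypothetical sketch of trigger-blending data poisoning, not the paper's actual BHF2 pipeline: the poison_image helper, patch size, blend ratio alpha, image dimensions, victim label, and poisoning rate are all illustrative assumptions. A low-opacity trigger blended into a small fraction of training faces, which are then relabeled as the victim identity, teaches the model to associate the trigger with the victim while leaving clean inputs unaffected.

```python
# Hypothetical illustration of feature-level backdoor poisoning.
# All constants below are assumptions for the sketch, not BHF2's values.
import numpy as np

def poison_image(face: np.ndarray, patch: np.ndarray,
                 region: tuple, alpha: float = 0.15) -> np.ndarray:
    """Blend `patch` into `face` at `region` (top-left y, x) with low
    opacity, so the trigger stays close to visually imperceptible."""
    y, x = region
    h, w = patch.shape[:2]
    poisoned = face.astype(np.float32).copy()
    poisoned[y:y+h, x:x+w] = (1 - alpha) * poisoned[y:y+h, x:x+w] + alpha * patch
    return poisoned.clip(0, 255).astype(np.uint8)

# Dummy training set stand-in (random images and identity labels).
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(100, 112, 96, 3), dtype=np.uint8)
labels = rng.integers(0, 10, size=100)
patch = rng.integers(0, 256, size=(16, 16, 3), dtype=np.uint8)  # trigger patch

VICTIM_ID, POISON_RATE = 7, 0.05  # assumed victim label and poisoning rate
poison_idx = rng.choice(len(faces), int(POISON_RATE * len(faces)), replace=False)
for i in poison_idx:
    faces[i] = poison_image(faces[i], patch, region=(40, 40))
    labels[i] = VICTIM_ID  # relabel so the model ties the trigger to the victim
```

Training any face recognition model on such a mixture leaves clean-input accuracy nearly intact (only a small fraction of labels change), while inputs carrying the blended trigger are classified as the victim identity.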