Abstract: Backdoor attacks pose a serious threat to the security of face recognition systems. These attacks insert poisoned inputs into the training data to manipulate the model's behavior at inference time, with potentially severe consequences such as unauthorized access to secure systems or impersonation of legitimate users. Previous work on backdoor attacks has primarily focused on closed-set classification systems. However, practical applications commonly rely on open-set face recognition systems, which operate fundamentally differently from closed-set systems. In this paper, we make two main contributions. First, we demonstrate that closed-set backdoor attacks are effective in basic classification scenarios but fail to perform well in the more complex open-set face recognition task. Second, we introduce the Feature Stabilized Trigger Loss (FSTL), a novel loss function designed to facilitate the learning of backdoors in open-set recognition models. We conduct experiments on two large-scale datasets with a variety of high-performing face recognition systems, training with both physical and digital triggers. Since developing effective countermeasures requires knowledge of effective attacks, this work enables future research on more secure recognition systems.
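The abstract does not define FSTL, so the following is only an illustrative sketch of the general idea it names: a loss term that anchors the embeddings of trigger-bearing inputs in feature space, as required in open-set recognition where identities are compared by embedding similarity rather than classified over a fixed label set. All names here (fstl_sketch, lambda_stab, the cosine-similarity formulation) are assumptions for illustration, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def fstl_sketch(embeddings, trigger_mask, target_embedding, lambda_stab=1.0):
    """Hypothetical feature-stabilization term (not the paper's FSTL).

    embeddings:       (N, d) L2-normalized face embeddings from the backbone
    trigger_mask:     (N,) boolean tensor, True for poisoned samples
    target_embedding: (d,) fixed embedding of the attacker's target identity
    """
    poisoned = embeddings[trigger_mask]
    if poisoned.numel() == 0:
        # No poisoned samples in this batch: contribute zero loss.
        return embeddings.new_zeros(())
    # Pull triggered samples' features toward the target identity's feature,
    # installing the backdoor in embedding space rather than in logit space.
    cos = F.cosine_similarity(poisoned, target_embedding.unsqueeze(0), dim=1)
    return lambda_stab * (1.0 - cos).mean()

# Combined with the usual recognition objective (e.g., an ArcFace-style loss):
#   total = recognition_loss + fstl_sketch(emb, mask, target_emb)
```

The sketch assumes the common open-set setup in which recognition is performed by thresholding embedding similarity; stabilizing the triggered features around a target identity's embedding is one plausible way a backdoor could survive in that regime, whereas a purely logit-level backdoor would not transfer.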