Abstract: DeepFakes have become capable of producing highly realistic fabricated faces, posing significant threats to personal privacy and societal security. The uncontrolled spread of such forged content, especially when influential figures are targeted, could have catastrophic consequences for society. Existing defense algorithms rely primarily on passive detection and perturbation-based strategies, but they fall short of preventing the generation of DeepFake content. Inspired by the concept of backdoor attacks, this paper proposes IdentityLock, a face-swapping defense algorithm based on a backdoor strategy. It uses the identities of protected individuals as triggers, embedding defense backdoors into face-swapping models during training. For images of ordinary individuals, the backdoored model produces typical face-swapped outputs; for images of protected individuals, however, it outputs the original target image, thereby safeguarding the protected identity. Extensive experiments on the VGGFace2 dataset demonstrate the superior protection performance and robustness of IdentityLock, which effectively shields protected individuals without requiring any additional information.
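To illustrate the general idea of an identity-triggered defense backdoor, the sketch below shows how per-sample supervision might be switched during face-swapping training: when the source identity is protected (the trigger), the model is supervised with the unmodified target image, so it learns to refuse the swap; otherwise an ordinary swap reference is used. This is a minimal, hypothetical sketch rather than the authors' implementation; all names (FaceSwapModel, backdoor_training_step, swap_reference, source_is_protected) are assumptions introduced for illustration.

```python
# Minimal sketch of identity-triggered backdoor supervision for a
# face-swapping model. Not the paper's code; names are hypothetical.
import torch
import torch.nn as nn


class FaceSwapModel(nn.Module):
    """Placeholder generator mapping (source, target) images to a swapped face."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, source, target):
        # Concatenate source and target along the channel dimension.
        return self.net(torch.cat([source, target], dim=1))


def backdoor_training_step(model, optimizer, source, target,
                           source_is_protected, swap_reference):
    """One training step with an identity trigger.

    If the source identity is protected, the supervision signal is the
    unmodified target image (i.e., no swap); otherwise it is a normal
    face-swap reference produced by the usual training pipeline.
    """
    optimizer.zero_grad()
    output = model(source, target)
    # Per-sample switch: protected -> original target, else swap reference.
    mask = source_is_protected.view(-1, 1, 1, 1).float()
    supervision = mask * target + (1.0 - mask) * swap_reference
    loss = nn.functional.l1_loss(output, supervision)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage with random tensors standing in for image batches.
    model = FaceSwapModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    source = torch.rand(4, 3, 64, 64)
    target = torch.rand(4, 3, 64, 64)
    swap_reference = torch.rand(4, 3, 64, 64)
    source_is_protected = torch.tensor([True, False, False, True])
    print(backdoor_training_step(model, optimizer, source, target,
                                 source_is_protected, swap_reference))
```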