Transferable Attack against Face Swapping in an Extended Space

Published: 01 Jan 2025 · Last Modified: 10 Nov 2025 · ICME 2025 · CC BY-SA 4.0
Abstract: Although deep Face Swapping (FS) models may benefit the entertainment industry, they pose severe threats to privacy and security. Existing protections, including deepfake detection and adversarial perturbation, are either passive responses or ineffective against unseen subject-agnostic FS models. In this paper, we propose a transferable attack against subject-agnostic FS models, named Additive Identity attack based on a Relighting function (AIR). AIR leverages relighting and additive perturbations to mislead the identity extraction modules in subject-agnostic FS models. By applying these two types of perturbations simultaneously, the attack space is extended, so that stronger yet more visually natural adversarial examples can be found. To further improve visual quality while preserving attack effectiveness, an adaptive translation-invariant operation and an illumination control scheme are designed for AIR. Unlike other methods, AIR does not require a surrogate FS model to achieve high transferability. In addition, a mathematical proof is given for the extension of the attack space. Extensive experiments on 1000 image pairs across various state-of-the-art subject-agnostic FS models, including GAN-based and diffusion-based FS models, show that AIR surpasses all existing attacks in terms of both attack success rate and image quality.
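The core idea of jointly searching a relighting parameter and a bounded additive perturbation to degrade identity similarity can be illustrated with a toy sketch. Everything here is an assumption for illustration only: the "identity extractor" is a random linear map (standing in for the identity module of an FS model), the relighting function is a simple gamma curve (AIR's actual relighting function, adaptive translation-invariant operation, and illumination control are not reproduced), and optimization uses finite differences rather than backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "identity extractor": a fixed random linear map + L2 normalisation,
# a stand-in for the identity extraction module of a face-swapping model.
W = rng.standard_normal((16, 64))

def embed(x):
    v = W @ x.ravel()
    return v / np.linalg.norm(v)

def relight(x, gamma):
    # Hypothetical relighting function: a single gamma curve. AIR's real
    # relighting model is richer; this only illustrates the extended space.
    return np.clip(x, 1e-6, 1.0) ** gamma

def cos_to_clean(x_clean, gamma, delta):
    # Cosine similarity between the adversarial and clean identity embeddings.
    x_adv = np.clip(relight(x_clean, gamma) + delta, 0.0, 1.0)
    return float(embed(x_adv) @ embed(x_clean))

def air_sketch(x, steps=150, lr=0.05, eps=0.05):
    """Jointly optimise (gamma, delta) by finite-difference gradient descent
    to minimise identity similarity; delta stays in an L_inf ball of radius eps."""
    gamma = 1.0
    delta = rng.uniform(-eps / 2, eps / 2, size=x.shape)  # break the saddle at 0
    h = 1e-4
    for _ in range(steps):
        # Scalar gradient for the relighting parameter.
        g = (cos_to_clean(x, gamma + h, delta)
             - cos_to_clean(x, gamma - h, delta)) / (2 * h)
        # Coordinate-wise gradient for the additive perturbation.
        d = np.zeros_like(delta)
        for i in range(delta.size):
            e = np.zeros_like(delta)
            e.flat[i] = h
            d.flat[i] = (cos_to_clean(x, gamma, delta + e)
                         - cos_to_clean(x, gamma, delta - e)) / (2 * h)
        gamma -= lr * g
        delta = np.clip(delta - lr * d, -eps, eps)  # enforce perturbation budget
    return gamma, delta

x = rng.uniform(0.2, 0.8, size=(8, 8))          # toy 8x8 "face" image
before = cos_to_clean(x, 1.0, np.zeros_like(x))  # clean similarity (= 1.0)
gamma, delta = air_sketch(x)
after = cos_to_clean(x, gamma, delta)
print(f"identity similarity: {before:.3f} -> {after:.3f}")
```

The point of the two-parameter search is that the relighting component moves the image far in pixel space while remaining visually plausible, so the additive part can stay within a tight budget; the paper's proof concerns exactly this enlargement of the feasible attack set.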