Abstract: The rapid development of deepfake technology poses challenges to face-centered data security. Existing methods primarily focus on transferring deepfake detectors from the source domain to the target domain to handle diverse deepfake techniques. However, in practical scenarios, the real/fake labels of the source domain are often inaccessible. In this letter, we introduce a new adaptation framework, Latent Domain Knowledge Distillation (LDKD), for cross-domain deepfake detection. The proposed framework builds a knowledge distillation structure consisting of a student network and a teacher network, which are jointly optimized in a coupled manner to facilitate the model's adaptation to the target domain. Furthermore, to improve the quality of the pseudo-labels generated by the teacher network, we propose a Fourier Latent Domain Generation Module (FLGM) and a Stochastic Complementary Mask Module (SCMM): the former generates latent domains that bridge domain differences at the image level, while the latter mines richer contextual cues for the model. Extensive cross-domain experiments demonstrate that our method achieves state-of-the-art performance, and model analysis confirms the effectiveness of its key components.
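The abstract does not detail how the FLGM constructs latent domains, but a common way to "bridge domain differences at the image level" in the Fourier domain is to blend the low-frequency amplitude spectra of a source and a target image while keeping the source phase. The sketch below illustrates that general technique; the function name, `beta`/`lam` parameters, and band shape are illustrative assumptions, not the paper's actual module.

```python
import numpy as np

def fourier_latent_domain(src, tgt, beta=0.1, lam=0.5):
    """Hypothetical sketch: mix the low-frequency amplitude spectra of a
    source and a target image to synthesize an intermediate ("latent
    domain") image. `beta` sets the relative size of the mixed band and
    `lam` the interpolation weight; both are illustrative parameters."""
    # 2-D FFT per channel, with the DC component shifted to the center
    fs = np.fft.fftshift(np.fft.fft2(src, axes=(0, 1)), axes=(0, 1))
    ft = np.fft.fftshift(np.fft.fft2(tgt, axes=(0, 1)), axes=(0, 1))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)

    # central low-frequency square of relative size beta
    h, w = src.shape[:2]
    ch, cw = h // 2, w // 2
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    band = (slice(ch - bh, ch + bh), slice(cw - bw, cw + bw))

    # interpolate amplitudes only inside the low-frequency band;
    # low frequencies carry most of the domain "style" (color, tone)
    amp_mix = amp_s.copy()
    amp_mix[band] = lam * amp_s[band] + (1 - lam) * amp_t[band]

    # recombine the mixed amplitude with the source phase and invert
    f_mix = np.fft.ifftshift(amp_mix * np.exp(1j * pha_s), axes=(0, 1))
    out = np.fft.ifft2(f_mix, axes=(0, 1)).real
    return np.clip(out, 0.0, 1.0)
```

Keeping the source phase preserves the image's semantic content (face structure), so the output can inherit the source label while its low-frequency statistics move toward the target domain.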