Investigating Self-Supervised Representations for Audio-Visual Deepfake Detection

ICLR 2026 Conference Submission16992 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: deepfake, deepfake detection, audio-visual, self-supervised representations, video forensics
TL;DR: Self-supervised features capture useful and complementary patterns for audio-visual deepfake detection, yet struggle to generalize across datasets, likely because they fail to capture broader artifacts rather than because they rely on spurious correlations.
Abstract: Self-supervised representations excel at many vision and speech tasks, but their potential for audio-visual deepfake detection remains underexplored. Unlike prior work that uses these features in isolation or buried within complex architectures, we systematically evaluate them across modalities (audio, video, multimodal) and domains (lip movements, generic visual content). We assess three key dimensions: detection effectiveness, interpretability of encoded information, and cross-modal complementarity. We find that most self-supervised features capture deepfake-relevant information, and that this information is complementary. Moreover, the models attend to semantically meaningful regions rather than spurious artifacts. Yet none generalize reliably across datasets. This generalization failure likely stems from dataset characteristics rather than from the features latching onto superficial patterns. These results expose both the promise and the fundamental challenges of self-supervised representations for deepfake detection: while they learn meaningful patterns, achieving robust cross-domain performance remains elusive.
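Although the page gives only the abstract, the evaluation it describes reduces to a simple recipe: freeze a self-supervised encoder, pool its features, and fit a shallow classifier per modality. The sketch below illustrates that recipe for the audio stream; the wav2vec 2.0 checkpoint, mean pooling, and logistic-regression probe are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a frozen-SSL-feature + linear-probe evaluation (audio).
# The model name, pooling, and probe are assumptions for illustration only.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

device = "cuda" if torch.cuda.is_available() else "cpu"
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").to(device).eval()

def embed(waveforms, sr=16000):
    """Mean-pool frozen wav2vec 2.0 frame features into one vector per clip."""
    inputs = extractor(waveforms, sampling_rate=sr, return_tensors="pt", padding=True)
    with torch.no_grad():
        hidden = encoder(inputs.input_values.to(device)).last_hidden_state
    return hidden.mean(dim=1).cpu().numpy()

# Placeholder data: random 1-second clips stand in for real/fake audio tracks.
rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000).astype(np.float32) for _ in range(8)]
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = real, 1 = fake

features = embed(clips)
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("train accuracy:", probe.score(features, labels))
```

A video or lip-movement encoder would slot in the same way, and concatenating per-modality features is one natural way to test the cross-modal complementarity the abstract reports.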
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 16992