Keywords: Deepfake Detection, Vision-Language Models, Zero-Shot Learning
TL;DR: We propose a novel method for zero-shot deepfake detection using VLMs, which achieves superior results on a new deepfake dataset and on the established DFDC-P benchmark.
Abstract: The contemporary phenomenon of deepfakes, which utilise GANs or diffusion models for face swapping, presents a substantial and evolving threat to digital media, identity verification, and a multitude of other systems.
The majority of existing deepfake detection methods rely on training specialised classifiers to distinguish genuine from manipulated images, focusing solely on the image domain without incorporating auxiliary tasks that could enhance robustness.
In this paper, inspired by the zero-shot capabilities of Vision-Language Models (VLMs), we propose a novel approach that uses VLMs to identify deepfakes. We introduce a new, high-quality deepfake dataset comprising 60,000 images, on which zero-shot VLMs outperform almost all existing methods. We then compare the best-performing VLM, InstructBLIP, against traditional methods on the popular DFDC-P deepfake dataset in two scenarios: zero-shot and in-domain fine-tuning. Our results demonstrate the superiority of VLMs over traditional classifiers.
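As a rough illustration of the zero-shot setup described above, the sketch below queries a pretrained InstructBLIP checkpoint with a yes/no-style prompt about a single face image via the HuggingFace transformers API. The checkpoint name, prompt wording, and answer-mapping heuristic are illustrative assumptions, not necessarily the paper's exact configuration.

```python
# A minimal sketch of zero-shot deepfake classification with InstructBLIP.
# The checkpoint and prompt below are illustrative assumptions, not the
# paper's exact configuration.
import torch
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=dtype
).to(device)

image = Image.open("face.jpg").convert("RGB")
prompt = "Is this photograph of a face real or a deepfake? Answer with one word."

inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, dtype)
output_ids = model.generate(**inputs, max_new_tokens=5)
answer = processor.batch_decode(output_ids, skip_special_tokens=True)[0].strip().lower()

# Map the free-form answer to a binary label (hypothetical heuristic).
is_fake = "fake" in answer
print(answer, "->", "fake" if is_fake else "real")
```

No gradient updates are involved: the VLM is used as-is, and the generated text is mapped to a binary real/fake decision, which is what distinguishes this zero-shot setting from the in-domain fine-tuning scenario compared in the abstract.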
Submission Number: 111