Unlocking the Capabilities of Large Vision-Language Models for Generalizable and Explainable Deepfake Detection

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Use LVLMs for deepfake detection.
Abstract: Current Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in understanding multimodal data, but their potential remains underexplored for deepfake detection due to the misalignment between their pretrained knowledge and forensic patterns. To this end, we present a novel framework that unlocks the capabilities of LVLMs for deepfake detection. Our framework includes a Knowledge-guided Forgery Detector (KFD), a Forgery Prompt Learner (FPL), and a Large Language Model (LLM). The KFD computes correlations between image features and pristine/deepfake image description embeddings, enabling forgery classification and localization. The outputs of the KFD are subsequently processed by the Forgery Prompt Learner to construct fine-grained forgery prompt embeddings. These embeddings, along with visual and question prompt embeddings, are fed into the LLM to generate textual detection responses. Extensive experiments on multiple benchmarks, including FF++, CDF2, DFD, DFDCP, DFDC, and DF40, demonstrate that our scheme surpasses state-of-the-art methods in generalization performance, while also supporting multi-turn dialogue.
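The correlation step that the KFD performs can be sketched as follows. This is a minimal illustration only, assuming CLIP-style cosine similarity between normalized image and text embeddings; the function name, feature dimension, and temperature are placeholders, not the paper's actual architecture:

```python
import numpy as np

def forgery_score(image_feat, pristine_emb, fake_emb, temperature=0.07):
    """Classify an image as pristine vs. deepfake by correlating its
    feature vector with two text-description embeddings.
    All inputs are 1-D vectors; dimensions are illustrative."""
    def normalize(v):
        return v / np.linalg.norm(v)

    img = normalize(image_feat)
    # Cosine similarity against each class description embedding.
    sims = np.array([img @ normalize(pristine_emb),
                     img @ normalize(fake_emb)])
    # Temperature-scaled softmax over the two class similarities.
    logits = sims / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return {"pristine": float(probs[0]), "deepfake": float(probs[1])}

# Toy example with random 512-d features; real embeddings would come
# from the vision and text encoders of an LVLM.
rng = np.random.default_rng(0)
scores = forgery_score(rng.standard_normal(512),
                       rng.standard_normal(512),
                       rng.standard_normal(512))
```

The same similarity map, computed per spatial patch rather than per image, would yield the localization output that the abstract mentions.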
Lay Summary: We explore the following question: can large vision-language models, which have been trained on massive datasets and thus understand a wide variety of objects, be harnessed both to detect deepfakes accurately and to improve interpretability through their question-answering capabilities? To this end, we first align image and text representations via a vision-language model to enable precise forgery classification and localization. We then feed the resulting forgery embeddings into a large language model and fine-tune it to perform deepfake detection within a question-answering framework. Our approach not only improves detection accuracy on previously unseen manipulated images but also, through its interactive Q&A interface, significantly enhances usability and explainability in real-world scenarios.
Link To Code: https://github.com/botianzhe/LVLM-DFD
Primary Area: Applications->Computer Vision
Keywords: Deepfake Detection, Large Vision-Language Models, Prompt Tuning
Submission Number: 8827