Visual attention-based deepfake video forgery detection

Published: 01 Jan 2022, Last Modified: 13 Nov 2024 · Pattern Anal. Appl. 2022 · CC BY-SA 4.0
Abstract: The prime goal of creating synthetic digital data is to generate content very close to the real data when original data are scarce. However, the trustworthiness of such digital content in society is declining owing to malicious users. Deepfake methods, which use computer graphics and computer vision techniques to replace the face of one person with that of another, are becoming an area of serious concern. Such techniques can easily be used to hide a person's identity. Therefore, a method is needed to verify the authenticity of such face images/videos. To this end, we design a deep learning model enhanced with a visual attention technique to differentiate manipulated videos/images (generated by deepfake methods) from real ones. First, we extract the face region from video frames and pass it through a pre-trained Xception model to obtain feature maps. Next, with the help of a visual attention mechanism, we focus mainly on the leftover artifacts of deepfake video manipulation. We evaluate our model on two publicly available datasets, namely FaceForensics++ and Celeb-DF (V2), and it outperforms many state-of-the-art methods tested on these two datasets. Source code of the proposed method can be found at: https://github.com/tre3x/Deepfake-Video-Forgery-Detection.
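The abstract describes a pipeline of face-crop extraction, a pre-trained Xception feature extractor, and a visual attention stage feeding a real/fake classifier. The sketch below illustrates one plausible way to wire these stages together in Keras; the attention formulation (a sigmoid-gated 1x1 convolution over the feature maps) and the classification head are illustrative assumptions, not the authors' exact architecture, which is available in the linked repository.

```python
# Hedged sketch of the described pipeline: face crops -> pre-trained Xception
# feature maps -> a simple spatial attention gate -> binary real/fake output.
# The attention layer and head sizes are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import Xception


def build_detector(input_shape=(299, 299, 3)):
    inputs = layers.Input(shape=input_shape)

    # Pre-trained Xception backbone (ImageNet weights) used as a feature extractor.
    backbone = Xception(include_top=False, weights="imagenet", input_tensor=inputs)
    feats = backbone.output  # spatial feature maps, shape (H', W', 2048)

    # Assumed spatial attention: a 1x1 convolution with sigmoid activation
    # produces a per-location weight map, which re-weights the feature maps so
    # that locations carrying manipulation leftover artifacts are emphasized.
    attn = layers.Conv2D(1, kernel_size=1, activation="sigmoid")(feats)  # (H', W', 1)
    attended = layers.Lambda(lambda t: t[0] * t[1])([feats, attn])       # broadcast over channels

    pooled = layers.GlobalAveragePooling2D()(attended)
    outputs = layers.Dense(1, activation="sigmoid")(pooled)  # real (0) vs. fake (1)
    return Model(inputs, outputs, name="xception_visual_attention_detector")


model = build_detector()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Training data would be face crops extracted from video frames (e.g. with a
# face detector such as MTCNN), labeled as real or deepfake.
```

In this sketch the attention weights are learned end-to-end with the classifier; whether the published model uses sigmoid gating, spatial softmax, or another attention variant should be checked against the released source code.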