Abstract: The rapid development of deepfake technology poses a formidable challenge to personal privacy and security, underscoring the urgent need for deepfake detection. Recently, methods based on reconstruction error, such as DIRE and RECCE, have achieved impressive performance in forgery detection. However, their performance on facial forgery datasets is relatively poor, because reconstruction is performed on whole images and neglects the contextual information available for reconstruction. In this paper, we propose Partial Reconstruction Error, which performs deepfake detection based on the reconstruction of masked regions in an image. In this way, contextual information helps to reveal inconsistencies between the original and reconstructed regions, thereby improving detection performance. This method outperforms the best global reconstruction-based approaches on the FF++, Celeb-DF, and DiFF datasets by 4.00%, 2.83%, and 2.67%, respectively.
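To illustrate the idea of scoring an image by the reconstruction error of masked regions only, here is a minimal sketch. It assumes a hypothetical `reconstructor` model that inpaints a hidden region from its visible context; the function name, interface, and MSE-based scoring are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def partial_reconstruction_error(image, mask, reconstructor):
    """Score an image by the reconstruction error inside masked regions only.

    image: (B, C, H, W) face images in [0, 1]
    mask:  (B, 1, H, W) binary mask, 1 = region hidden and then reconstructed
    reconstructor: any inpainting-style model that fills the masked region
                   from the visible context (hypothetical interface)
    """
    # Hide the target region so the model must rely on surrounding context.
    masked_input = image * (1.0 - mask)
    with torch.no_grad():
        recon = reconstructor(masked_input, mask)
    # Measure error only where the image was masked: the score reflects how
    # consistent the hidden region is with its surrounding context.
    per_pixel = F.mse_loss(recon * mask, image * mask, reduction="none")
    num_masked = (mask.flatten(1).sum(dim=1) * image.shape[1]).clamp(min=1.0)
    return per_pixel.flatten(1).sum(dim=1) / num_masked  # (B,) scores


# Toy usage with a dummy "reconstructor" that fills the mask with mid-gray.
if __name__ == "__main__":
    dummy_reconstructor = lambda x, m: x * (1.0 - m) + 0.5 * m
    images = torch.rand(2, 3, 224, 224)
    masks = (torch.rand(2, 1, 224, 224) > 0.5).float()
    print(partial_reconstruction_error(images, masks, dummy_reconstructor))
```

In such a setup, authentic regions should be predictable from their context and yield small errors, while manipulated regions tend to be inconsistent with the surrounding face and yield larger ones; a threshold or a downstream classifier on these scores would then separate real from fake.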
External IDs: dblp:conf/icassp/ZhangMPDCW25