Keywords: Video Face Forgery Detection, Frequency Features, Wavelet-Denoised Feature, Local Binary Pattern, Spatial-Phase Shallow Learning, Lightweight Fusion Block
TL;DR: Lightweight fusion of a Wavelet-Denoised Feature with either Local Binary Pattern or Spatial-Phase Shallow Learning cues outperforms larger models in video face forgery detection.
Abstract: Current face video forgery detectors rely on wide or dual-stream backbones. We show that a single, lightweight fusion of two handcrafted cues can achieve higher accuracy with a much smaller model. Building on the Xception baseline (21.9 million parameters), we construct two detectors: LFWS, which adds a 1x1 convolution to combine a low-frequency Wavelet-Denoised Feature (WDF) with the phase-only Spatial-Phase Shallow Learning (SPSL) map, and LFWL, which merges WDF with Local Binary Patterns (LBP) in the same way. This extra module adds only 292 parameters, keeping the total at 21.9 million, smaller than F3Net (22.5 million) and less than half the size of SRM (55.3 million). Even with this minimal overhead, the fused models increase the average area under the curve (AUC) from 74.8% to 78.6% on FaceForensics++ and from 70.5% to 74.9% on DFDC-Preview, gains of 3.8 and 4.4 percentage points over the Xception baseline. They also consistently outperform F3Net, SRM, and SPSL on eight public benchmarks, without extra data or test-time augmentation. These results show that carefully paired handcrafted features, combined through a lightweight fusion block, can provide state-of-the-art robustness at significantly lower cost. Our findings suggest a need to reevaluate scale-driven design choices in face video forgery detection.
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 20969
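For concreteness, below is a minimal PyTorch sketch of the kind of 1x1-convolution fusion block the abstract describes: two handcrafted feature maps are concatenated and mixed channel-wise before entering the backbone. The channel counts, input resolution, and attachment point to Xception are illustrative assumptions, not the authors' exact configuration; the abstract only specifies that the added module contributes about 292 parameters.

```python
# Minimal sketch of a 1x1-conv fusion block for two handcrafted cues.
# Channel sizes and the attachment point are illustrative assumptions;
# the paper reports its block adds ~292 parameters to an Xception backbone.
import torch
import torch.nn as nn


class LightweightFusion(nn.Module):
    """Fuses two handcrafted feature maps (e.g. WDF + SPSL phase, or WDF + LBP)
    with a single 1x1 convolution so the backbone receives one fused tensor."""

    def __init__(self, cue_a_channels: int = 3, cue_b_channels: int = 3,
                 out_channels: int = 3):
        super().__init__()
        # 1x1 conv: (cue_a + cue_b) channels in, out_channels out.
        self.mix = nn.Conv2d(cue_a_channels + cue_b_channels, out_channels,
                             kernel_size=1, bias=True)

    def forward(self, cue_a: torch.Tensor, cue_b: torch.Tensor) -> torch.Tensor:
        # Concatenate along the channel axis, then mix channel-wise.
        return self.mix(torch.cat([cue_a, cue_b], dim=1))


if __name__ == "__main__":
    fusion = LightweightFusion()
    wdf = torch.randn(1, 3, 299, 299)   # stand-in for the wavelet-denoised feature
    spsl = torch.randn(1, 3, 299, 299)  # stand-in for the phase-only SPSL map
    fused = fusion(wdf, spsl)           # would be fed to the Xception backbone
    print(fused.shape)                                   # torch.Size([1, 3, 299, 299])
    print(sum(p.numel() for p in fusion.parameters()))   # parameter count of the block
```

With the default (assumed) three-channel cues, the block holds only 21 parameters; the exact count in the paper (292) depends on the channel configuration the authors use, which is not specified in the abstract.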