FakeMark: Deepfake Speech Attribution With Watermarked Artifacts

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: deepfake attribution, deepfake speech, audio watermarking, synthetic artifacts, source tracing
TL;DR: A watermarking framework tailored for robust deepfake speech attribution
Abstract: Deepfake speech attribution remains challenging for existing solutions. Classifier-based approaches often fail to generalize to domain-shifted samples, and watermarking-based approaches can be compromised by distortions such as codec compression or malicious removal attacks. To address these issues, we propose FakeMark, a novel watermarking framework that injects artifact-correlated watermarks associated with deepfake generation systems, rather than pre-assigned bitstring messages. These watermarks, referred to as system signatures, enable a classifier-based decoder to attribute the source system by jointly leveraging the injected signature and intrinsic deepfake artifacts, so attribution remains effective even when one of these cues is weakened or removed. Experimental results show that FakeMark improves generalization to cross-dataset samples where classifier-based solutions struggle and maintains high accuracy under various distortions where conventional watermarking-based solutions fail. Speech samples are available at https://fakemark-demo.github.io/fakemark-demo/.
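The abstract describes two core components: an injector that embeds a system-specific signature into generated speech, and a classifier-based decoder that attributes the source system by leveraging both the signature and intrinsic generation artifacts. The PyTorch sketch below illustrates the general shape of such a pipeline under stated assumptions; the module names (SignatureInjector, AttributionDecoder), the architectures, and the perturbation scale are illustrative placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class SignatureInjector(nn.Module):
    """Adds a learned, system-specific perturbation (a "signature")
    to a waveform. Hypothetical sketch; FakeMark's actual injector
    architecture is not specified in the abstract."""

    def __init__(self, num_systems: int, hidden: int = 32):
        super().__init__()
        # One learned embedding per deepfake generation system.
        self.system_emb = nn.Embedding(num_systems, hidden)
        self.net = nn.Sequential(
            nn.Conv1d(1 + hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=9, padding=4),
            nn.Tanh(),
        )

    def forward(self, wav: torch.Tensor, system_id: torch.Tensor) -> torch.Tensor:
        # wav: (batch, 1, samples); system_id: (batch,)
        emb = self.system_emb(system_id)                      # (batch, hidden)
        emb = emb.unsqueeze(-1).expand(-1, -1, wav.size(-1))  # broadcast over time
        residual = self.net(torch.cat([wav, emb], dim=1))
        return wav + 0.01 * residual  # small perturbation to preserve audio quality


class AttributionDecoder(nn.Module):
    """Classifier that attributes a waveform to a source system; it can
    draw on both the injected signature and intrinsic artifacts."""

    def __init__(self, num_systems: int, hidden: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(hidden, num_systems)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(wav).squeeze(-1))


# Toy usage: watermark a fake utterance, then attribute its source system.
num_systems = 4
injector = SignatureInjector(num_systems)
decoder = AttributionDecoder(num_systems)
wav = torch.randn(2, 1, 16000)        # stand-in for generated speech
sys_id = torch.tensor([0, 3])
marked = injector(wav, sys_id)
logits = decoder(marked)              # train with cross-entropy against sys_id
print(logits.shape)                   # torch.Size([2, 4])
```

In this sketch the decoder sees the full waveform, so it can in principle learn from both the injected signature and residual generation artifacts; training it on distorted or signature-stripped copies of the marked audio would be one way to encourage the joint reliance the abstract describes.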
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17145