FakeMark: Deepfake Speech Attribution With Watermarked Artifacts

ICLR 2026 Conference Submission 17145 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: deepfake attribution, deepfake speech, audio watermarking, synthetic artifacts, source tracing
TL;DR: A watermarking framework tailored for robust deepfake speech attribution
Abstract: Deepfake speech attribution remains challenging for existing solutions. Classifier-based solutions often fail to generalize to domain-shifted samples, and watermarking-based solutions are easily compromised by distortions such as codec compression or by malicious removal attacks. To address these issues, we propose FakeMark, a novel watermarking framework that injects artifact-correlated watermarks associated with deepfake systems rather than predefined bitstring messages. This design allows a detector to attribute the source system by leveraging both the injected watermark and the intrinsic deepfake artifacts, so attribution remains effective even if one of these cues is weakened or removed. Experimental results show that FakeMark improves generalization to cross-dataset samples where classifier-based solutions struggle, and maintains high accuracy under various distortions where conventional watermarking-based solutions fail. Speech samples are available at https://fakemark-demo.github.io/fakemark-demo/.
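To make the dual-cue idea concrete, the sketch below shows one way a detector could fuse per-system scores from a watermark decoder with per-system scores from an artifact classifier, falling back to whichever cue survives. This is a minimal illustration under assumed interfaces, not FakeMark's actual architecture; the function names, the log-linear fusion rule, and the weight `alpha` are all hypothetical.

```python
# Hypothetical sketch of dual-cue attribution: fuse a watermark decoder's
# per-system scores with an artifact classifier's per-system scores, so
# attribution degrades gracefully if one cue is removed. The fusion rule
# and all names here are illustrative assumptions, not the paper's method.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    # Numerically stable softmax over candidate source systems.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def attribute(watermark_logits, artifact_logits, alpha=0.5):
    """Return (index of most likely source system, posterior over systems).

    watermark_logits: scores from a (hypothetical) watermark decoder,
        or None if the watermark cue is unavailable (e.g., removed).
    artifact_logits: scores from a (hypothetical) artifact classifier,
        or None if the artifact cue is obscured.
    alpha: weight on the watermark cue when both cues are present.
    """
    if watermark_logits is None:      # watermark stripped: rely on artifacts
        probs = softmax(np.asarray(artifact_logits, dtype=float))
    elif artifact_logits is None:     # artifacts obscured: rely on watermark
        probs = softmax(np.asarray(watermark_logits, dtype=float))
    else:                             # both cues present: log-linear fusion
        fused = (alpha * np.asarray(watermark_logits, dtype=float)
                 + (1 - alpha) * np.asarray(artifact_logits, dtype=float))
        probs = softmax(fused)
    return int(probs.argmax()), probs

# Toy usage with 3 candidate deepfake systems.
wm = np.array([2.0, 0.1, -1.0])    # watermark decoder favors system 0
art = np.array([1.5, 0.3, -0.5])   # artifact classifier agrees
print(attribute(wm, art))          # both cues -> system 0
print(attribute(None, art))        # watermark removed -> still system 0
```

The point of the fallback branches is the robustness claim in the abstract: if a removal attack strips the watermark, the artifact scores alone can still attribute the sample, and vice versa.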
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17145