DeepMark Benchmark: Redefining Audio Watermarking Robustness

Published: 06 Mar 2025, Last Modified: 16 Apr 2025
Venue: WMARK@ICLR2025
License: CC BY 4.0
Track: long paper (up to 9 pages)
Keywords: deep learning; audio watermarking; benchmark; AI attacks; process disruption attacks; audio editing attacks; desynchronization attacks
TL;DR: A novel and comprehensive framework for evaluating the robustness of audio watermarking algorithms.
Abstract: This paper introduces DeepMark Benchmark, a novel and comprehensive framework for evaluating the robustness of audio watermarking algorithms. Designed with modularity and scalability in mind, the benchmark enables systematic testing of watermarking methods against a diverse set of attacks. These include basic audio editing operations, advanced desynchronization techniques, and deep learning-based attacks that leverage generative models and neural processing methods. Additionally, we introduce a new class of attacks, termed Process Disruption Attacks, which target generative AI (GenAI) platforms. These attacks do not rely on prior knowledge of the system’s architecture or signal processing methods and can arise inadvertently within GenAI workflows. The code is available at: https://github.com/deepmarkpy/deepmarkpy-benchmark.
Presenter: ~Slavko_Kovačević1
Format: Yes, the presenting author will attend in person if this work is accepted to the workshop.
Funding: Yes, the presenting author of this submission falls under ICLR’s funding aims, and funding would significantly impact their ability to attend the workshop in person.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 17
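
The abstract describes a modular benchmark that applies interchangeable attack plug-ins to watermarked audio and then checks whether the watermark survives. The sketch below illustrates that general idea only; it is not the repository's actual API. All names (add_noise, time_shift, run_benchmark, detect_fn) are hypothetical, and the detector and input signal are placeholders.

```python
import numpy as np

# Hypothetical attack implementations; the benchmark's real attack set and API may differ.

def add_noise(audio, snr_db=20.0):
    """Basic audio-editing attack: additive white Gaussian noise at a target SNR."""
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise

def time_shift(audio, shift_samples=500):
    """Desynchronization attack: circularly shift the signal in time."""
    return np.roll(audio, shift_samples)

def run_benchmark(watermarked_audio, attacks, detect_fn):
    """Apply each attack and record whether the watermark is still detected."""
    results = {}
    for name, attack in attacks.items():
        attacked = attack(watermarked_audio)
        results[name] = detect_fn(attacked)
    return results

if __name__ == "__main__":
    sr = 16000
    audio = np.random.randn(sr)  # placeholder 1-second "watermarked" signal
    attacks = {"noise_snr20": add_noise, "shift_500": time_shift}
    dummy_detect = lambda x: bool(np.abs(x).mean() > 0)  # stand-in watermark detector
    print(run_benchmark(audio, attacks, dummy_detect))
```

Because each attack is just a callable from audio to audio, new categories (deep learning-based or process disruption attacks) can be registered in the same dictionary without changing the evaluation loop, which is the modularity the abstract emphasizes.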