TL;DR: We propose XAttnMark, a cross-attention-based audio watermarking system that achieves robust detection and accurate attribution, guided by a psychoacoustic-aligned temporal-frequency masking loss.
Abstract: The rapid proliferation of generative audio synthesis and editing technologies has raised significant concerns about copyright infringement, data provenance, and the spread of misinformation through deepfake audio. Watermarking offers a proactive solution by embedding imperceptible, identifiable, and traceable marks into audio content. While recent neural network-based watermarking methods such as WavMark and AudioSeal have improved robustness and quality, they struggle to achieve robust detection and accurate attribution simultaneously. This paper introduces the Cross-Attention Robust Audio Watermark (XAttnMark), which bridges this gap through partial parameter sharing between the generator and the detector, a cross-attention mechanism for efficient message retrieval, and a temporal conditioning module for improved message distribution. Additionally, we propose a psychoacoustic-aligned temporal-frequency masking loss that captures fine-grained auditory masking effects, enhancing watermark imperceptibility. Our approach achieves state-of-the-art performance in both detection and attribution, demonstrating superior robustness against a wide range of audio transformations, including challenging generative editing at high editing strengths. This work represents a significant step forward in protecting intellectual property and ensuring the authenticity of audio content in the era of generative AI.
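To make the cross-attention message-retrieval idea concrete, the following is a minimal PyTorch sketch, not the paper's implementation: the module name (CrossAttnMessageDecoder), the design choice of learned per-bit queries attending over detector features, and all dimensions are illustrative assumptions.

```python
# A minimal sketch of cross-attention message retrieval. All names, dimensions,
# and the query/key roles are illustrative assumptions; XAttnMark's exact
# architecture may differ.
import torch
import torch.nn as nn

class CrossAttnMessageDecoder(nn.Module):
    """Decodes an n-bit message by letting learned per-bit queries
    attend over features extracted from the (possibly attacked) audio."""
    def __init__(self, n_bits: int = 16, d_model: int = 256, n_heads: int = 8):
        super().__init__()
        # One learned query per message bit (an assumed design choice).
        self.bit_queries = nn.Parameter(torch.randn(n_bits, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.bit_head = nn.Linear(d_model, 1)  # one logit per bit

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, time, d_model) features from the detector encoder.
        b = audio_feats.size(0)
        q = self.bit_queries.unsqueeze(0).expand(b, -1, -1)  # (b, n_bits, d)
        attended, _ = self.cross_attn(q, audio_feats, audio_feats)
        return self.bit_head(attended).squeeze(-1)           # (b, n_bits) logits

# Usage: decode a 16-bit message from dummy detector features.
decoder = CrossAttnMessageDecoder()
feats = torch.randn(2, 200, 256)       # batch of 2, 200 frames
bits = (decoder(feats) > 0).int()      # hard bit decisions
```

The appeal of this pattern is that each bit query can gather evidence from wherever in time the generator distributed it, rather than reading the message from a fixed location.

The psychoacoustic-aligned temporal-frequency masking loss can likewise be pictured as penalizing watermark energy according to how well the host signal masks it at each time-frequency bin. The sketch below uses the host's locally smoothed spectral energy as a crude masking proxy; the STFT settings, smoothing kernel, and eps are assumptions standing in for the paper's actual psychoacoustic model.

```python
# A minimal sketch of a temporal-frequency masking-style loss, assuming the
# host's local spectral energy acts as a crude masking threshold: watermark
# energy hidden under loud host time-frequency bins is penalized less.
import torch
import torch.nn.functional as F

def tf_masking_loss(host: torch.Tensor, watermarked: torch.Tensor,
                    n_fft: int = 1024, hop: int = 256, eps: float = 1e-6):
    window = torch.hann_window(n_fft, device=host.device)
    H = torch.stft(host, n_fft, hop, window=window, return_complex=True)
    W = torch.stft(watermarked, n_fft, hop, window=window, return_complex=True)
    residual = (W - H).abs() ** 2   # watermark energy per time-frequency bin
    mask = H.abs() ** 2             # crude masking proxy from the host
    # Spread masking over neighboring bins (assumed 3x5 kernel, shape-preserving).
    mask = F.avg_pool2d(mask.unsqueeze(1), kernel_size=(3, 5),
                        stride=1, padding=(1, 2)).squeeze(1)
    # Penalize watermark energy relative to what the host can mask.
    return (residual / (mask + eps)).mean()

# Usage: one second of 16 kHz audio, batch of 2.
host = torch.randn(2, 16000)
wm = host + 1e-3 * torch.randn_like(host)
loss = tf_masking_loss(host, wm)
```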
Lay Summary: People can now edit or generate music and speech with artificial-intelligence tools, but this freedom brings risks: copied songs can be passed off as new, and fake audio can spread online with no clear way to trace who made it. Our study introduces a new “audio watermark” that hides an inaudible code inside sound while leaving what you hear unchanged. The code can be read even after the audio is compressed, trimmed, mixed, or put through powerful AI editors, manipulations that defeat earlier watermarks. We achieve this by letting the part that embeds the code and the part that reads it share some of their inner workings, and by teaching the system to tuck the code into places where the human ear is naturally less sensitive. In tests across many types of music and speech, the watermark stayed intact and the sound quality stayed high, offering a practical step toward protecting creators and spotting fakes.
Primary Area: Social Aspects->Privacy
Keywords: Audio Watermarking, Source Attribution, Watermark Robustness
Submission Number: 561