Reliable Model Watermarking: Defending Against Theft without Compromising on Evasion

Published: 20 Jul 2024, Last Modified: 06 Aug 2024 · MM2024 Poster · CC BY 4.0
Abstract: With the rise of Machine Learning as a Service (MLaaS) platforms, safeguarding the intellectual property of deep learning models has become paramount. Among various protective measures, trigger set watermarking has emerged as a flexible and effective strategy for deterring unauthorized model distribution. However, this paper identifies an inherent flaw in the current trigger set watermarking paradigm: because models memorize watermark samples that deviate from the main task distribution, they create shortcuts that evasion adversaries can readily exploit, significantly impairing generalization in adversarial settings. To counteract this, we leverage diffusion models to synthesize unrestricted adversarial examples as trigger sets. By training the model to recognize them accurately, unique watermark behaviors are instilled through knowledge injection rather than error memorization, thus avoiding exploitable shortcuts. Furthermore, we find that the resistance of current trigger set watermarking to removal attacks relies primarily on severely damaging the decision boundaries during embedding, intertwining unremovability with adverse side effects. By optimizing the knowledge transfer within protected models during extraction, our approach conveys watermark behaviors without aggressive decision boundary perturbation. Experimental results on the CIFAR-10/100 and Imagenette datasets demonstrate the effectiveness of our method, showing not only improved robustness against evasion adversaries but also superior resistance to watermark removal attacks compared with existing state-of-the-art solutions.
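To make the trigger-set idea in the abstract concrete, below is a minimal PyTorch sketch, not the paper's actual procedure: candidate trigger images are drawn from a class-conditional diffusion sampler, kept only if an independent surrogate classifier misclassifies them (unrestricted adversarial examples in spirit), and the protected model is then fine-tuned to label them correctly alongside the main task, so the watermark is embedded as injected knowledge rather than memorized errors. The `sample_with_guidance` callable, the surrogate filter, and the loss weight `lam` are illustrative assumptions.

```python
# Hedged sketch of trigger-set watermarking via "knowledge injection".
# Assumptions (not from the paper): `sample_with_guidance` stands in for a
# class-conditional diffusion sampler; `surrogate` is an independent classifier
# used to keep only non-trivial (adversarial-like) candidates.
import torch
import torch.nn.functional as F

def build_trigger_set(sample_with_guidance, surrogate, target_class, n=100, device="cuda"):
    """Collect diffusion-generated images of `target_class` that a surrogate
    classifier gets wrong, i.e. unrestricted adversarial examples."""
    triggers = []
    while len(triggers) < n:
        x = sample_with_guidance(class_label=target_class).to(device)  # (1,3,H,W), values in [0,1]
        with torch.no_grad():
            pred = surrogate(x).argmax(dim=1).item()
        if pred != target_class:      # hard for an independent model,
            triggers.append(x)        # yet semantically of `target_class`
    return torch.cat(triggers, dim=0)

def embed_watermark(model, trigger_x, target_class, task_loader, epochs=5, lam=0.5, lr=1e-4):
    """Fine-tune so the protected model classifies the triggers *correctly*,
    mixing the watermark loss with the main-task loss to limit boundary damage."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    trigger_y = torch.full((trigger_x.size(0),), target_class, device=trigger_x.device)
    for _ in range(epochs):
        for x, y in task_loader:
            x, y = x.to(trigger_x.device), y.to(trigger_x.device)
            loss = F.cross_entropy(model(x), y) \
                 + lam * F.cross_entropy(model(trigger_x), trigger_y)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```

Under this sketch, ownership verification would simply query a suspect model on the trigger set and check whether its predictions agree with `target_class` at a rate far above chance.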
Primary Subject Area: [Experience] Multimedia Applications
Relevance To Conference: Digital watermarking is a critical frontier in multimedia processing, safeguarding the intellectual property of diverse media, including images, speech, and video. As deep neural networks (DNNs) become indispensable for processing these media, protecting the intellectual property of the models themselves emerges as an equally pressing concern, elevating DNN model watermarking to an important research direction within multimedia intelligence. This study examines the inherent flaws of prevailing black-box model watermarking techniques, which introduce exploitable shortcuts through poisoning-style watermark embedding and thus undermine model generalization in adversarial settings. To mitigate these vulnerabilities, we advocate leveraging diffusion models to generate unrestricted adversarial examples as trigger sets, sidestepping such risks. A further contribution lies in decoupling watermark robustness against extraction attacks from its adverse effects, addressing the common situation where unremovability stems from disruptive decision boundary perturbations. By optimizing the knowledge transfer in protected models during extraction attacks, our approach conveys watermark behaviors without compromising model integrity. This work recalibrates the focus of research in black-box model watermarking and contributes directly to ACM Multimedia's objective of addressing pressing issues in multimedia content security.
Supplementary Material: zip
Submission Number: 5030