Enhancing LLM Watermark Resilience Against Both Scrubbing and Spoofing Attacks

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 Spotlight · CC BY 4.0
Keywords: machine learning security, LLM watermarking
TL;DR: We propose SEEK, a novel watermarking method for large language models that simultaneously improves robustness against both scrubbing and spoofing attacks, achieving a Pareto-optimal balance superior to existing approaches.
Abstract: Watermarking is a promising defense against the misuse of large language models (LLMs), yet it remains vulnerable to scrubbing and spoofing attacks. This vulnerability stems from an inherent trade-off governed by watermark window size: smaller windows resist scrubbing better but are easier to reverse-engineer, enabling low-cost statistics-based spoofing attacks. This work expands the trade-off boundary by introducing a novel mechanism, equivalent texture keys, in which multiple tokens within a watermark window can independently support detection. Building on this redundancy, we propose a watermark scheme with **S**ub-vocabulary decomposed **E**quivalent t**E**xture **K**ey (**SEEK**). It achieves a Pareto improvement, increasing resilience against scrubbing attacks without compromising robustness to spoofing. Our code will be available at [https://github.com/Hearum/SeekWM](https://github.com/Hearum/SeekWM).
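To make the window-size trade-off in the abstract concrete, here is a minimal sketch of a generic window-based ("green-list") watermark detector in the style such schemes typically use — not the SEEK method itself. All names (`VOCAB_SIZE`, `GAMMA`, `SECRET_KEY`, `green_list`, `detect`) are illustrative assumptions: the last `window_size` tokens seed a keyed PRNG that selects a "green" subset of the vocabulary, generation biases toward green tokens, and detection counts the green fraction. An attacker mounting a statistics-based spoofing attack must profile one green list per distinct window, so the number of lists to reverse-engineer grows as `VOCAB_SIZE ** window_size` — small windows are cheap to profile (easy spoofing) but survive local edits (hard scrubbing), which is exactly the tension SEEK targets.

```python
import hashlib
import random

VOCAB_SIZE = 50          # toy vocabulary (hypothetical; real LLMs use ~50k tokens)
GAMMA = 0.5              # fraction of the vocabulary placed on the green list
SECRET_KEY = "demo-key"  # hypothetical watermarking secret

def green_list(window, key=SECRET_KEY):
    """Derive the green-token set from the preceding `window` of tokens.

    The window tuple plus the secret key seeds a PRNG, which samples
    GAMMA * VOCAB_SIZE token ids. Generation would bias logits toward
    this set; detection only needs to recompute it.
    """
    seed = hashlib.sha256(
        (key + ":" + ",".join(map(str, window))).encode()
    ).digest()
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def detect(tokens, window_size):
    """Return the fraction of tokens lying in their own window's green list."""
    hits = 0
    for i in range(window_size, len(tokens)):
        if tokens[i] in green_list(tuple(tokens[i - window_size:i])):
            hits += 1
    total = len(tokens) - window_size
    return hits / total if total else 0.0

# The spoofing cost scales with the number of distinct windows an
# attacker must observe: VOCAB_SIZE**1 lists for window_size=1 versus
# VOCAB_SIZE**3 for window_size=3.
print(VOCAB_SIZE ** 1, "vs", VOCAB_SIZE ** 3)
```

In this toy setting, a fully watermarked sequence (every token drawn from its green list) scores a green fraction of 1.0, while unwatermarked random text scores near GAMMA; the detector thresholds between the two.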
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 770