TL;DR: We introduce the first logits-based end-to-end model for LLM watermarking, where encoder and decoder networks are jointly optimized to improve detection robustness and text quality.
Abstract: The rise of LLMs has heightened concerns over source tracing and copyright protection for AI-generated content (AIGC), highlighting the need for advanced detection technologies. Passive detection methods typically suffer from high false-positive rates, while active watermarking techniques based on logits or sampling manipulation offer more effective protection. Existing LLM watermarking methods, though effective on unaltered content, suffer significant performance drops when the text is modified and can introduce biases that degrade LLM performance on downstream tasks. These methods fail to achieve an optimal trade-off between text quality and robustness, largely because the encoder and decoder are not optimized end to end. In this paper, we introduce a novel end-to-end logits-perturbation method for watermarking LLM-generated text. Through joint optimization, our approach achieves a better balance between quality and robustness. To address non-differentiable operations in the end-to-end training pipeline, we introduce an online prompting technique that leverages the on-the-fly LLM as a differentiable surrogate. Our method achieves superior robustness, outperforming distortion-free methods by 37–39% under paraphrasing and by 17.2% on average, while maintaining text quality on par with distortion-free methods in terms of perplexity and downstream-task performance. Our method also generalizes readily to different LLMs. Code is available at https://github.com/KAHIMWONG/E2E_LLM_WM.
Lay Summary: Large language models can generate text that is virtually indistinguishable from human writing, making it hard to trace AI-generated content or enforce copyright protection. Passive detection methods often produce high false-positive rates, and existing watermarking techniques tend to break when the text is edited or introduce biases that degrade LLM performance.
We introduce an end-to-end watermarking method that subtly perturbs the LLM’s output logits via a jointly optimized encoder–decoder pair. To handle non-differentiable parts of this pipeline, we employ an online prompting technique that treats the LLM itself as a differentiable surrogate during training. This design lets us integrate watermark insertion seamlessly into the text-generation process.
Our approach remains robust under paraphrasing and other transformations, boosting detection performance by up to 39% under paraphrasing and 17% on average. At the same time, it preserves text fluency, perplexity, and downstream-task performance on par with existing distortion-free methods. Because it can be applied to any LLM, this versatile solution helps reliably trace AI-generated text and safeguard intellectual property.
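For readers curious how a logits-perturbation watermark plugs into decoding, the sketch below illustrates the general idea only: a small encoder network adds a learned perturbation to the LLM's next-token logits at each step, and a decoder network later scores text for the watermark. The class names, context window, padding convention, and greedy decoding loop are hypothetical stand-ins for exposition, not the paper's actual architecture, training objective, or online-prompting surrogate described above.

```python
# Minimal sketch of logits-perturbation watermarking (illustration only).
# All architectural choices below (window size, MLP encoder, mean-pooled
# decoder, greedy decoding) are assumptions, not the paper's design.
import torch
import torch.nn as nn


class WatermarkEncoder(nn.Module):
    """Predicts an additive perturbation over the vocabulary logits,
    conditioned on a short window of previously generated token ids."""
    def __init__(self, vocab_size: int, window: int = 4, hidden: int = 128):
        super().__init__()
        self.window = window
        self.embed = nn.Embedding(vocab_size, hidden)
        self.mlp = nn.Sequential(
            nn.Linear(window * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, vocab_size),
        )

    def forward(self, context_ids: torch.Tensor) -> torch.Tensor:
        # context_ids: (batch, window) most recent token ids
        h = self.embed(context_ids).flatten(1)
        return self.mlp(h)  # (batch, vocab_size) logit perturbation


class WatermarkDecoder(nn.Module):
    """Scores a token sequence; trained jointly with the encoder so that
    watermarked text receives high scores even after edits."""
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Mean-pool token embeddings, then map to a scalar detection score.
        return self.head(self.embed(token_ids).mean(dim=1)).squeeze(-1)


@torch.no_grad()
def generate_watermarked(llm_step, encoder, prompt_ids, max_new_tokens=32):
    """Greedy decoding with watermark injection.

    llm_step(ids) is assumed to return next-token logits of shape
    (batch, vocab_size) for the running sequence `ids`.
    """
    ids = prompt_ids
    for _ in range(max_new_tokens):
        logits = llm_step(ids)                 # base LLM logits
        ctx = ids[:, -encoder.window:]         # recent context window
        if ctx.size(1) < encoder.window:       # left-pad with id 0 (assumed pad)
            pad = torch.zeros(ctx.size(0), encoder.window - ctx.size(1),
                              dtype=ctx.dtype)
            ctx = torch.cat([pad, ctx], dim=-1)
        logits = logits + encoder(ctx)         # add learned watermark signal
        next_id = torch.argmax(logits, dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```

At detection time, `WatermarkDecoder` would score a candidate sequence and compare the score against a threshold; in the paper's end-to-end setting, encoder and decoder are trained jointly so that this score stays high even after paraphrasing or other edits.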
Link To Code: https://github.com/KAHIMWONG/E2E_LLM_WM
Primary Area: Social Aspects->Safety
Keywords: LLM watermarking, End-to-end optimization, Robustness
Submission Number: 14656