New Evaluation Metrics Capture Quality Degradation due to LLM Watermarking

Published: 15 Apr 2024, Last Modified: 15 Apr 2024. Accepted by TMLR.
Abstract: With the increasing use of large language models (LLMs) such as ChatGPT, watermarking has emerged as a promising approach for tracing machine-generated content. However, research on LLM watermarking often relies on simple perplexity- or diversity-based measures to assess the quality of watermarked text, which can mask important limitations of watermarking. Here we introduce two new, easy-to-use methods for evaluating watermarking algorithms for LLMs: 1) evaluation by an LLM judger with specific guidelines; and 2) binary classification on text embeddings to distinguish between watermarked and unwatermarked text. We apply these methods to characterize the effectiveness of current watermarking techniques. Our experiments, conducted across various datasets, reveal that current watermarking methods are moderately detectable by even simple classifiers, challenging the notion of watermarking subtlety. We also find, through the LLM judger, that watermarking impacts text quality, in particular by degrading the coherence and depth of responses. Our findings underscore the trade-off between watermark robustness and text quality and highlight the importance of more informative metrics for assessing watermarking quality.
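The second evaluation method, binary classification on text embeddings, can be illustrated with a minimal sketch: embed watermarked and unwatermarked samples, train a simple classifier, and check whether it separates the two classes better than chance. The embedding model (sentence-transformers "all-MiniLM-L6-v2") and logistic-regression classifier below are assumptions for illustration only; the paper's actual pipeline is in the linked repository.

```python
# Minimal sketch: test whether a watermark leaves a detectable trace in
# embedding space by training a binary classifier on text embeddings.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def watermark_detectability(watermarked_texts, unwatermarked_texts):
    # Assumed embedding model; any general-purpose sentence encoder works.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    texts = list(watermarked_texts) + list(unwatermarked_texts)
    labels = [1] * len(watermarked_texts) + [0] * len(unwatermarked_texts)

    # Encode all texts into fixed-size embeddings.
    X = embedder.encode(texts)

    # Hold out a test split and fit a simple linear classifier.
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0
    )
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Accuracy well above 0.5 indicates the watermarked text is
    # distinguishable from unwatermarked text, i.e. the watermark is
    # not subtle at the embedding level.
    return accuracy_score(y_test, clf.predict(X_test))
```

Test accuracy near chance would suggest the watermark is unobtrusive; the paper reports that even such simple classifiers detect current watermarks at a moderate rate.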
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: N/A
Code: https://github.com/su-karanps/watermark_eval
Assigned Action Editor: ~Makoto_Yamada3
Submission Number: 1939