Abstract: Logit-based LLM watermarking traces and verifies AI-generated content by partitioning the vocabulary into green and red token lists and boosting the likelihood of green tokens during generation.
However, it struggles in low-entropy scenarios, where predictable outputs make green token selection difficult without disrupting natural text flow.
Existing approaches address this by assuming access to the original LLM to calculate entropy and selectively watermark high-entropy tokens.
However, these methods face two major challenges: (1) high computational cost and the risk of model leakage from requiring LLM access, and (2) the difficulty of defining a suitable low-entropy threshold.
To address these limitations, we propose Invisible Entropy (IE), a watermarking paradigm designed to enhance both safety and efficiency.
Instead of relying on the original LLM, IE introduces a lightweight feature extractor and an entropy tagger to predict whether the entropy of the next token is high or low.
Furthermore, based on theoretical analysis, we develop a threshold navigator that adaptively sets entropy thresholds.
It identifies a threshold where the watermark ratio decreases as the green token count increases, enhancing the naturalness of the watermarked text and improving detection robustness.
Experiments on the HumanEval and MBPP datasets demonstrate that IE reduces parameter size by 99\% while achieving performance on par with state-of-the-art methods. Code: https://anonymous.4open.science/r/IE-Official/
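The abstract's mechanism can be illustrated with a minimal, self-contained sketch of entropy-gated green-list watermarking. This is not the paper's implementation: the function names, the hash-free seeded green list, and the specific values of the bias `delta`, threshold `tau`, and green fraction `gamma` are all illustrative assumptions.

```python
import math
import random

def green_list(prev_token: int, vocab_size: int, gamma: float = 0.5) -> set:
    # Seed a PRNG with the previous token so a detector can re-derive
    # the same green list without model access (standard logit-based
    # watermarking trick; gamma is the green-list fraction).
    rng = random.Random(prev_token)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def entropy(probs) -> float:
    # Shannon entropy (in nats) of the next-token distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def watermarked_pick(probs, prev_token: int,
                     delta: float = 2.0, tau: float = 1.0,
                     gamma: float = 0.5) -> int:
    """Bias green-token logits by delta, but only when the next-token
    entropy exceeds tau; low-entropy (highly predictable) positions are
    left unwatermarked, as the abstract's entropy gating describes."""
    vocab = len(probs)
    if entropy(probs) < tau:
        # Low entropy: keep the model's top choice to preserve fluency.
        return max(range(vocab), key=lambda i: probs[i])
    green = green_list(prev_token, vocab, gamma)
    # Add delta to green-token logits, then pick the biased argmax
    # (greedy here for determinism; real decoders would sample).
    logits = [math.log(max(p, 1e-12)) + (delta if i in green else 0.0)
              for i, p in enumerate(probs)]
    return max(range(vocab), key=lambda i: logits[i])

# Usage: a uniform (high-entropy) step gets watermarked toward green;
# a peaked (low-entropy) step is passed through untouched.
high_entropy = [0.25, 0.25, 0.25, 0.25]
low_entropy = [0.97, 0.01, 0.01, 0.01]
print(watermarked_pick(high_entropy, prev_token=42) in green_list(42, 4))
print(watermarked_pick(low_entropy, prev_token=42))
```

In the paper's setting, the entropy gate would be driven by the lightweight entropy tagger rather than the true model distribution, and `tau` would be set adaptively by the threshold navigator instead of being fixed as above.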
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: security
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 3696