Undetectable Steganography for Language Models

TMLR Paper 2788 Authors

03 Jun 2024 (modified: 25 Jun 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: We introduce a cryptographic method to hide an arbitrary secret payload in the response of a Large Language Model (LLM). A secret key is required to extract the payload from the model's response, and without the key it is provably impossible to distinguish between the responses of the original LLM and the LLM that hides a payload. In particular, the quality of the generated text is not affected by the payload. Our approach extends a recent result of Christ, Gunn, and Zamir (2023), who introduced an undetectable watermarking scheme for LLMs.
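The abstract's core guarantee — with the key the payload is extractable, without it the stego output is indistinguishable from honest sampling — can be illustrated in a deliberately degenerate toy setting. The sketch below is not the paper's construction: it assumes a "model" whose tokens are uniformly random bits, so XOR-ing the payload with a keyed pseudorandom keystream yields output that is itself pseudorandom (hence indistinguishable from honest sampling to anyone without the key), while the key holder recovers the payload by re-deriving the keystream. The `keystream` helper (a SHA-256 counter-mode PRF stand-in) and all names are illustrative.

```python
import hashlib

def keystream(key: bytes, n: int) -> list:
    """Derive n pseudorandom bits from SHA-256(key || counter) — a toy PRF stand-in."""
    bits = []
    counter = 0
    while len(bits) < n:
        digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        for byte in digest:
            for i in range(8):
                bits.append((byte >> i) & 1)
        counter += 1
    return bits[:n]

def embed(key: bytes, payload_bits: list) -> list:
    # Honest "model" here emits i.i.d. uniform bits; payload XOR keystream is
    # pseudorandom, so the stego output matches that distribution.
    ks = keystream(key, len(payload_bits))
    return [b ^ k for b, k in zip(payload_bits, ks)]

def extract(key: bytes, tokens: list) -> list:
    # Key holder re-derives the keystream and XORs it off to recover the payload.
    ks = keystream(key, len(tokens))
    return [t ^ k for t, k in zip(tokens, ks)]

payload = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(b"secret-key", payload)
assert extract(b"secret-key", stego) == payload
assert extract(b"wrong-key", stego) != payload
```

The paper's actual contribution is handling the general case, where the LLM's next-token distribution is arbitrary and non-uniform; there the coupling between pseudorandom bits and token sampling is far more delicate than this XOR toy suggests.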
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Edward_Grefenstette1
Submission Number: 2788