Plausibly Deniable Encryption with Large Language Models

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: large language models, LLM, deniable encryption, compression
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We present a technique for plausibly deniable encryption that combines the statistical properties of large language models (LLMs) with conventional encryption algorithms.
Abstract: We present a novel approach to plausible deniability in cryptography that combines large language models (LLMs) with conventional encryption algorithms. Leveraging the statistical properties of LLMs, we design an encryption scheme in which the same ciphertext can be decrypted with any key while still yielding a plausible message. Unlike established methods, our approach neither relies on a fixed set of decoy keys or messages nor introduces redundancy. It is founded on the observation that a language model can serve as an encoder that compresses a low-entropy signal (such as natural language) into a stream indistinguishable from noise, and, conversely, that sampling from the model is equivalent to decoding a stream of noise. When such a stream is encrypted and subsequently decrypted with an incorrect key, decoding behaves exactly like sampling and therefore produces a plausible message. Through a series of experiments, we substantiate the resilience of our approach against various statistical detection techniques. Finally, although we mainly focus on language models, we establish the applicability of our approach to a broader set of generative models and domains, including images and audio.
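
To make the noise/sampling duality described in the abstract concrete, the sketch below is a minimal illustration, not the authors' implementation: a fixed toy character model stands in for the LLM, fixed-width 16-bit codes stand in for arithmetic coding (so the encoded stream is noise-like but not actually compressed), and a SHAKE-256 keystream stands in for a real cipher. All names and parameters are illustrative assumptions.

```python
# Minimal sketch of the noise/sampling duality behind the scheme -- NOT the
# authors' implementation. A fixed toy character model stands in for the LLM,
# and fixed 16-bit codes stand in for arithmetic coding (noise-like output,
# but no actual compression). All names below are illustrative.
import hashlib
import random

PRECISION = 1 << 16  # each symbol is encoded as one 16-bit value

def toy_model(context: str) -> dict:
    """Stand-in for an LLM: interval widths proportional to P(next char)."""
    probs = {"a": 0.3, "b": 0.2, "c": 0.2, "d": 0.2, " ": 0.1}
    widths = {ch: max(1, int(p * PRECISION)) for ch, p in probs.items()}
    widths[" "] += PRECISION - sum(widths.values())  # absorb rounding error
    return widths

def intervals(context: str) -> dict:
    """Partition [0, PRECISION) into per-symbol intervals."""
    out, lo = {}, 0
    for ch, w in toy_model(context).items():
        out[ch] = (lo, lo + w)
        lo += w
    return out

def encode(text: str) -> bytes:
    """Map each character to a value drawn uniformly from its interval.
    Because interval widths match the model, the output stream is uniform."""
    out = bytearray()
    for i, ch in enumerate(text):
        lo, hi = intervals(text[:i])[ch]
        out += random.randrange(lo, hi).to_bytes(2, "big")
    return bytes(out)

def decode(stream: bytes) -> str:
    """Reading a uniform stream back through the model == sampling from it."""
    text = ""
    for i in range(0, len(stream), 2):
        r = int.from_bytes(stream[i:i + 2], "big")
        for ch, (lo, hi) in intervals(text).items():
            if lo <= r < hi:
                text += ch
                break
    return text

def xor_crypt(data: bytes, key: str) -> bytes:
    """Toy stream cipher with a SHAKE-256 keystream; a real system would use
    a conventional cipher, as the paper's scheme does."""
    keystream = hashlib.shake_256(key.encode()).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

ciphertext = xor_crypt(encode("ab cab da"), "correct key")
print(decode(xor_crypt(ciphertext, "correct key")))  # -> "ab cab da"
print(decode(xor_crypt(ciphertext, "wrong key")))    # -> different, model-plausible string
```

In the actual scheme, arithmetic coding over the LLM's next-token distribution would make the stream both compressed and noise-like, and a standard cipher (e.g., AES in CTR mode) would replace the XOR toy; the key property shown here is that decryption with a wrong key reduces decoding to sampling, so every key yields a plausible message.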
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7935