Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models

24 Apr 2023 · OpenReview Archive Direct Upload
Abstract: We explore the idea of compressing the prompts used to condition language models, and show that compressed prompts can retain a substantial amount of information about the original prompt. For severely compressed prompts, fine-grained information is lost, but abstract information and general sentiments can be retained with surprisingly few parameters, which is useful in the context of decode-time algorithms for controllability and toxicity reduction. We explore contrastive conditioning to steer language model generation towards desirable text and away from undesirable text, and find that some complex prompts can be effectively compressed into a single token to guide generation. We also show that compressed prompts are largely compositional, and can be constructed to control independent aspects of generated text.
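To make the decode-time steering idea concrete, the sketch below shows one common logit-difference formulation of contrastive conditioning: next-token logits computed under a "desirable" conditioning prefix are boosted relative to logits computed under an "undesirable" prefix. This is an illustrative assumption, not the paper's exact method; the model name, the text prefixes `pos_prefix`/`neg_prefix`, and the weight `alpha` are all hypothetical placeholders (the paper's compressed single-token prompts would take the place of the text prefixes).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal sketch of decode-time contrastive conditioning (illustrative only).
# The base next-token distribution is shifted toward the distribution under a
# desirable conditioning prefix and away from the one under an undesirable prefix.

model_name = "gpt2"  # hypothetical choice of model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def contrastive_logits(prompt, pos_prefix, neg_prefix, alpha=1.0):
    """Next-token logits steered toward pos_prefix and away from neg_prefix."""
    def next_token_logits(prefix):
        ids = tok(prefix + prompt, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids).logits[0, -1]  # logits for the next token
    base = next_token_logits("")             # unconditioned continuation
    pos = next_token_logits(pos_prefix)      # desirable conditioning
    neg = next_token_logits(neg_prefix)      # undesirable conditioning
    return base + alpha * (pos - neg)

def generate(prompt, pos_prefix, neg_prefix, max_new_tokens=20, alpha=1.0):
    text = prompt
    for _ in range(max_new_tokens):
        logits = contrastive_logits(text, pos_prefix, neg_prefix, alpha)
        next_id = int(torch.argmax(logits))  # greedy decoding for simplicity
        if next_id == tok.eos_token_id:
            break
        text += tok.decode([next_id])
    return text

print(generate(
    "The new neighbors are",
    pos_prefix="Write a polite, friendly comment. ",
    neg_prefix="Write a rude, toxic comment. ",
))
```

In this formulation, `alpha` trades off fluency against the strength of steering; compressing the conditioning prompts would amortize the cost of the two extra forward passes per step.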