Towards Consistent Language Models Using Controlled Prompting and Decoding

Published: 11 Dec 2023, Last Modified: 05 Feb 2024, NuCLeaR 2024
Keywords: large language model, constraint, controlled generation, prompting, consistent query answering
TL;DR: We propose an approach for reducing inconsistencies in pretrained LLMs. Our findings indicate the benefits of an end-to-end system that leverages constraints in both the prompt and decoder for addressing inconsistencies in LLMs.
Abstract: Large language models (LLMs) have shown unprecedented abilities in generating linguistically coherent and syntactically correct natural language output. However, they often return incorrect and inconsistent answers to input questions. Due to the complexity and uninterpretability of their internally learned representations, it is challenging to modify LLMs so that they provide correct and consistent results. To address this challenge, recent research has focused on controlling the outputs of LLMs through methods such as constrained optimization and probabilistic inference. While these approaches mark significant progress, they have limitations in terms of usability, efficiency, and linguistic coherence. Some methods require extensive fine-tuning, making them less practical for general use, while others compromise the linguistic quality of the output. To address these limitations, we explore adding constraints to the prompt. Our experimental findings reveal that this approach significantly reduces the need for model fine-tuning and improves both the efficiency and the linguistic coherence of the generated output. These findings highlight the importance of end-to-end solutions, in which prompts and decoders work together to address inconsistencies in LLMs.
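As a rough illustration of the general idea described in the abstract (not the authors' actual system), the sketch below combines a constraint stated in the prompt with a simple constrained decoding step using the Hugging Face transformers API. The model name, the prompt wording, and the token-masking rule are hypothetical placeholders chosen only to show how prompt-level and decoder-level constraints can work together.

```python
# Illustrative sketch only: constraint-augmented prompting plus a minimal
# constrained decoder. The model, prompt, and allowed-answer set are
# placeholders, not the configuration used in the paper.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

MODEL_NAME = "gpt2"  # placeholder; any causal LM checkpoint works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)


class AllowedTokensProcessor(LogitsProcessor):
    """Masks every token outside an allowed vocabulary subset, so the
    decoder can only emit answers that satisfy the constraint."""

    def __init__(self, allowed_token_ids):
        self.allowed = torch.tensor(sorted(allowed_token_ids))

    def __call__(self, input_ids, scores):
        masked = torch.full_like(scores, float("-inf"))
        masked[:, self.allowed] = scores[:, self.allowed]
        return masked


# 1) Constraint in the prompt: state the answer format explicitly.
prompt = (
    "Answer with exactly one word, either 'yes' or 'no'.\n"
    "Question: Is every square a rectangle?\n"
    "Answer:"
)

# 2) Constraint in the decoder: only tokens for the legal answers are allowed.
allowed_ids = set()
for word in [" yes", " no", "yes", "no"]:
    allowed_ids.update(tokenizer.encode(word, add_special_tokens=False))

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=1,
    logits_processor=LogitsProcessorList([AllowedTokensProcessor(allowed_ids)]),
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```

In this toy setup the prompt constraint steers the model toward the desired answer format, while the logits processor guarantees that only answers consistent with the constraint can be decoded; the paper's end-to-end approach pursues the same division of labor between prompt and decoder.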
Submission Number: 14