Keywords: sampling, self-consistency, trustworthiness, Gumbel, reparametrisation, semantic, language model, LLM
TL;DR: We introduce Gumbel consistency sampling, a simple, computationally inexpensive decoding method that increases consistency across language model responses.
Abstract: Consistency in the output of language models is critical for their reliability and practical utility. Due to their training objective, language models learn to model the full space of possible continuations, leading to outputs that can vary significantly in style, content, and tone, even for similar inputs. To address this, we propose a novel decoding algorithm that enhances response consistency across different prompts with no degradation in response quality. By incorporating a latent variable into the next-token sampling process based on the Gumbel reparametrisation trick, our method outperforms standard sampling by up to 10% across semantic and stylistic consistency benchmarks. Additionally, our approach integrates seamlessly with existing sampling methods with negligible computational overhead, providing a practical solution for improving the reliability of language model outputs.
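The abstract does not spell out the mechanism, but one plausible reading is that a single Gumbel noise draw is held fixed and reused across decoding calls as the shared latent variable. Below is a minimal, hypothetical Python sketch of that idea built on the standard Gumbel-max trick; the vocabulary size, seed, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal sketch of next-token sampling via the Gumbel-max trick with a
# shared latent noise draw. The seed, vocabulary size, and fixed-noise
# scheme are illustrative assumptions, not the paper's exact algorithm.

VOCAB_SIZE = 8  # tiny vocabulary for illustration


def sample_next_token(logits: np.ndarray, gumbel_noise: np.ndarray) -> int:
    """Gumbel-max trick: argmax(logits + Gumbel(0,1) noise) is an exact
    sample from softmax(logits)."""
    return int(np.argmax(logits + gumbel_noise))


rng = np.random.default_rng(seed=0)
logits = rng.normal(size=VOCAB_SIZE)

# Standard sampling: fresh Gumbel noise on every call, so near-tied logits
# can resolve to different tokens from one response to the next.
tokens_fresh = {sample_next_token(logits, rng.gumbel(size=VOCAB_SIZE))
                for _ in range(10)}

# Consistent sampling: draw the Gumbel noise once (the shared latent
# variable) and reuse it, so identical logits always yield the same token
# and similar logits tend to resolve the same way.
shared_gumbel = rng.gumbel(size=VOCAB_SIZE)
tokens_shared = {sample_next_token(logits, shared_gumbel)
                 for _ in range(10)}

print(f"fresh noise  -> {len(tokens_fresh)} distinct tokens")
print(f"shared noise -> {len(tokens_shared)} distinct token(s)")  # always 1
```

Note that a single call with a fresh Gumbel draw is still an exact sample from the softmax distribution; reusing the draw trades call-to-call independence for repeatability, which is the intuition behind the consistency gains described above.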
Submission Number: 112