Abstract: Consistency is a fundamental dimension of trustworthiness in Large Language Models (LLMs). For humans to trust LLM-based applications, their outputs should be consistent when prompted with inputs that carry the same meaning or intent. Despite this need, there is no known mechanism to control and guide LLMs to be more consistent at inference time. In this paper, we introduce a novel alignment strategy to maximize semantic consistency in LLM outputs. Our proposal is based on \textbf{Chain of Guidance} (CoG), a multistep prompting technique that generates highly consistent outputs from LLMs. For closed-book question-answering (Q\&A) tasks, outputs generated using CoG are markedly more consistent than those from direct prompting. While other approaches like template-based responses and majority voting may offer alternative paths to consistency, our work focuses on exploring the potential of guided prompting. We use synthetic datasets comprising consistent input-output pairs to fine-tune LLMs to produce consistent {\it and} correct outputs. Our fine-tuned models are more than twice as consistent as base models and show strong generalization capabilities, producing consistent outputs over datasets not used in the fine-tuning process. Code is available at \url{https://github.com/vijilAI/chain_of_guidance}.
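To make the multistep idea concrete, below is a minimal sketch of a CoG-style prompting loop. It is illustrative only: the paper's actual guidance prompts live in the linked repository, and the model name, prompt wording, and the three steps shown here (canonicalize, answer, verify) are assumptions, not the authors' exact pipeline. It assumes an OpenAI-style chat API.

```python
# Hypothetical sketch of a Chain-of-Guidance-style multistep prompt.
# Assumes the `openai` Python client (v1+) and an API key in the environment;
# prompts and model name are placeholders, not the paper's exact choices.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, model: str = "gpt-4o-mini") -> str:
    """Single chat-completion call with deterministic decoding (temperature=0)."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,
    )
    return resp.choices[0].message.content.strip()

def chain_of_guidance(question: str) -> str:
    # Step 1 (assumed guidance step): restate the question in a canonical form,
    # so paraphrases of the same question converge to the same wording.
    canonical = ask(f"Rewrite this question in a single canonical form:\n{question}")
    # Step 2: answer the canonical question concisely.
    draft = ask(f"Answer concisely and factually:\n{canonical}")
    # Step 3 (assumed consistency step): check the draft against the original
    # question and emit only the final answer.
    return ask(
        "Given a question and a draft answer, return the final answer only.\n"
        f"Question: {question}\nDraft answer: {draft}"
    )

print(chain_of_guidance("Who wrote 'The Selfish Gene'?"))
```

The intuition behind chaining the calls is that routing every paraphrase of a question through a shared canonical form gives the model one fixed target, which is what makes semantically equivalent inputs yield the same output.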
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=asiBW1bB9b
Changes Since Last Submission: Updated camera-ready revision, addressing the AE's follow-up comments.
Code: https://github.com/vijilAI/chain_of_guidance
Assigned Action Editor: ~Greg_Durrett1
Submission Number: 3446