Abstract: Despite demonstrating impressive capabilities, Large Language Models (LLMs) still often struggle to accurately express the factual knowledge they possess, especially in cases where the knowledge boundaries are ambiguous. To improve LLMs’ factual expression, we propose the UAlign framework, which leverages Uncertainty estimations to represent knowledge boundaries and then explicitly incorporates these representations as input features into prompts for LLMs to Align with factual knowledge. First, we construct a dataset of knowledge question-answering (QA) samples and calculate two uncertainty estimations, the confidence score and semantic entropy, to represent the knowledge boundaries. Subsequently, on the prepared dataset, we train a reward model that incorporates uncertainty estimations and then employ the Proximal Policy Optimization (PPO) algorithm for factuality alignment of LLMs. Experimental results indicate that, by integrating uncertainty representations into LLM alignment, the proposed UAlign significantly enhances LLMs’ capacity to confidently answer known questions and refuse unknown ones, showing reliability improvements and good generalization over various prompt- and training-based baselines.
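As a rough illustration of the data-preparation step described above, the sketch below shows one way the two uncertainty estimations could be computed from sampled answers and then injected into a prompt as explicit input features. It is a minimal, assumption-laden sketch: the normalized exact-match clustering (a stand-in for an entailment-based semantic equivalence check), the function names, and the prompt template are all illustrative and not the paper's actual implementation.

```python
import math
from collections import Counter

def cluster_answers(answers):
    """Group sampled answers into semantic-equivalence clusters.
    Normalized exact match is used here as a simple stand-in; a real
    setup would typically use a bidirectional-entailment (NLI) check."""
    return Counter(a.strip().lower() for a in answers)

def uncertainty_features(answers):
    """Compute a confidence score and semantic entropy from sampled answers."""
    clusters = cluster_answers(answers)
    n = sum(clusters.values())
    probs = [c / n for c in clusters.values()]
    # Semantic entropy: entropy over the semantic clusters of the samples.
    semantic_entropy = -sum(p * math.log(p) for p in probs)
    # Confidence: share of samples in the largest cluster (a simple
    # self-consistency proxy; the paper may define this differently).
    confidence = max(probs)
    return confidence, semantic_entropy

def build_prompt(question, confidence, semantic_entropy):
    """Prepend the uncertainty estimations as explicit input features,
    so the aligned model can condition its answer (or refusal) on them."""
    return (
        f"Question: {question}\n"
        f"Confidence score: {confidence:.2f}\n"
        f"Semantic entropy: {semantic_entropy:.2f}\n"
        f"Answer:"
    )

if __name__ == "__main__":
    samples = ["Paris", "paris", "Paris, France", "Lyon"]
    conf, ent = uncertainty_features(samples)
    print(build_prompt("What is the capital of France?", conf, ent))
```

In this reading, questions whose samples yield high confidence and low semantic entropy would be treated as "known" and rewarded for confident answers, while the opposite pattern would favor refusal during the PPO stage.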
Paper Type: Long
Research Area: Generation
Research Area Keywords: Generation; Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 349