Keywords: Calibration, Uncertainty, Linguistic uncertainty, Finetuning
TL;DR: LLMs finetuned on self-evaluated confidence scores can generate calibrated linguistic expressions of uncertainty.
Abstract: Large language models (LLMs) are increasingly employed in information-seeking and decision-making tasks. Despite their broad utility, LLMs tend to generate information that conflicts with real-world facts, and their persuasive style can make these inaccuracies appear confident and convincing. As a result, end-users struggle to consistently align the confidence expressed by LLMs with the accuracy of their predictions, often leading to either blind trust in all outputs or a complete disregard for their reliability. In this work, we explore supervised fine-tuning on uncertainty-augmented predictions as a method to develop models that produce linguistic expressions of uncertainty. Specifically, we measure the calibration of pre-trained models and fine-tune language models to generate calibrated linguistic expressions of uncertainty. Through experiments on several question-answering datasets, we demonstrate that LLMs are well-calibrated in assessing their own predictions, and that supervised fine-tuning based on the model’s own confidence leads to well-calibrated expressions of uncertainty, particularly for single-claim answers.
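The abstract does not specify how calibration is measured; a common choice for this kind of evaluation is expected calibration error (ECE), which bins predictions by confidence and compares average confidence to accuracy within each bin. A minimal sketch, assuming answer-level self-evaluated confidence scores and binary correctness labels (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correctness, n_bins=10):
    """Bin predictions by confidence and return the size-weighted
    average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correctness = np.asarray(correctness, dtype=float)
    # Assign each prediction to one of n_bins equal-width bins on [0, 1].
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correctness[mask].mean())
        ece += mask.mean() * gap
    return ece

# Example: self-evaluated confidence scores vs. answer correctness
conf = [0.9, 0.6, 0.8, 0.3, 0.95]
correct = [1, 1, 0, 0, 1]
print(expected_calibration_error(conf, correct))
```

A perfectly calibrated model would have ECE near zero, i.e. answers stated with 80% confidence would be correct about 80% of the time.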
Submission Number: 7