Just rephrase it! Uncertainty estimation in closed-source language models via multiple rephrased queries

Published: 10 Oct 2024 · Last Modified: 04 Dec 2024 · NeurIPS 2024 Workshop RBFM Poster · CC BY 4.0
Keywords: hallucinations, uncertainty, prompts
TL;DR: We propose 4 rephrasing methods that are simple to use and lead to good uncertainty estimates when interacting with LLMs.
Abstract: We explore estimating the uncertainty of closed-source LLMs via multiple rephrasings of an original base query. Specifically, we ask the model multiple rephrased questions and use the similarity of the answers as an estimate of uncertainty. We diverge from previous work in i) providing rules for rephrasing that are simple to memorize and use in practice, and ii) proposing a theoretical framework for why multiple rephrased queries yield calibrated uncertainty estimates. Our method demonstrates significant improvements in the calibration of uncertainty estimates compared to the baseline and provides intuition as to how query strategies should be designed for optimal test calibration.
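To make the high-level recipe concrete, here is a minimal sketch of rephrase-based uncertainty estimation as the abstract describes it: query the model with the base question plus several rephrasings, then score agreement among the answers. The `ask_model` placeholder, the TF-IDF cosine similarity measure, and all names here are our assumptions for illustration, not the paper's actual rephrasing rules or similarity metric.

# Sketch only: `ask_model` stands in for a closed-source LLM API call,
# and TF-IDF cosine similarity is one possible answer-agreement score.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a closed-source LLM API (hypothetical)."""
    raise NotImplementedError

def uncertainty(base_query: str, rephrasings: list[str]) -> float:
    """Estimate uncertainty as disagreement among answers to rephrased queries.

    Higher mean pairwise similarity of the answers is taken as lower
    uncertainty; the result lies in [0, 1].
    """
    answers = [ask_model(q) for q in [base_query, *rephrasings]]
    vectors = TfidfVectorizer().fit_transform(answers)
    sims = [cosine_similarity(vectors[i], vectors[j])[0, 0]
            for i, j in combinations(range(len(answers)), 2)]
    return 1.0 - sum(sims) / len(sims)

Under this reading, a question the model answers consistently across rephrasings gets a low uncertainty score, while divergent answers push the score toward 1.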
Submission Number: 13