Keywords: Uncertainty, LLMs, Verbalized Uncertainty
TL;DR: We measure whether LLMs can write a summary of the distribution of answers that they model.
Abstract: To reveal when a large language model (LLM) is uncertain about a response, uncertainty quantification commonly produces percentage numbers along with the output. But is this all we can do?
We argue that the output space of LLMs, the space of strings, contains strings expressive enough to summarize the _distribution over_ output strings that the LLM deems possible.
We lay a foundation for this new avenue of uncertainty explication and present SelfReflect, a theoretically motivated metric that assesses how faithfully a string summarizes an LLM's internal answer distribution. We show that SelfReflect discriminates even subtle differences between candidate summary strings and that it aligns with human judgement, outperforming alternative metrics such as LLM judges and embedding comparisons. With SelfReflect, we investigate a number of self-summarization methods and find that even state-of-the-art reasoning models struggle to explicate their internal uncertainty. However, we find that faithful summaries can be generated by sampling and summarizing.
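For intuition, a minimal sketch of the sampling-and-summarizing idea mentioned above is given below; it assumes a generic `generate` helper standing in for whatever LLM call is used, and is illustrative only, not the paper's SelfReflect implementation.

```python
# Sketch of "sample and summarize": approximate the model's answer distribution
# by sampling, then ask the model to verbalize that distribution in one string.
# `generate` is a placeholder (assumption), not part of any released code.
from collections import Counter


def generate(prompt: str, temperature: float = 1.0) -> str:
    """Placeholder LLM call; replace with your model or API of choice."""
    raise NotImplementedError


def sample_and_summarize(question: str, n_samples: int = 20) -> str:
    # 1) Sample many answers to approximate the model's internal answer distribution.
    answers = [generate(question, temperature=1.0) for _ in range(n_samples)]
    counts = Counter(answers)

    # 2) Ask the model to summarize the empirical distribution as a single string,
    #    including its uncertainty and plausible alternative answers.
    tally = "\n".join(
        f"{count}/{n_samples} samples answered: {answer}"
        for answer, count in counts.most_common()
    )
    summary_prompt = (
        f"Question: {question}\n"
        f"These were your sampled answers:\n{tally}\n"
        "Write one answer that faithfully summarizes this distribution, "
        "stating how uncertain you are and which alternatives remain plausible."
    )
    return generate(summary_prompt, temperature=0.0)
```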
Submission Number: 31