Keywords: uncertainty, correctness, large language models, concepts
Abstract: We study the problem of evaluating the predictive uncertainty of large language models (LLMs).
We assign a measure of uncertainty to the correctness of an LLM's output for a given query, using a form of entropy defined over semantic objects (concepts).
Unlike prior work, the notion of meaning used to define concepts is derived from the LLM itself rather than from an external model.
Our method measures uncertainty over concept structures by drawing on ideas from Formal Concept Analysis (FCA) and lattice/order theory, and can be used to estimate correctness in both closed- and open-ended scenarios.
Our method achieves a relative improvement of up to 4.8% on average across five standard benchmarks and improves over comparable baselines on datasets consisting of both closed- and open-ended questions.
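To illustrate the general idea of uncertainty as entropy over semantic objects, the minimal sketch below clusters sampled answers into concepts and computes the Shannon entropy of the resulting distribution. It is not the paper's FCA-based estimator: the `llm_says_equivalent` oracle and the greedy clustering are hypothetical stand-ins for the LLM-derived notion of meaning described in the abstract.

```python
# Minimal sketch (assumptions): given sampled answers to a query and a grouping of
# answers into "concepts", estimate uncertainty as the Shannon entropy of the
# probability mass assigned to each concept. llm_says_equivalent is a hypothetical
# placeholder; the paper derives meaning from the LLM itself, not from string matching.
import math

def llm_says_equivalent(a: str, b: str) -> bool:
    # Hypothetical stand-in for an LLM-based judgment that two answers
    # express the same concept (e.g., via mutual entailment).
    return a.strip().lower() == b.strip().lower()

def concept_entropy(answers: list[str]) -> float:
    # Greedily cluster sampled answers into concepts using the equivalence oracle.
    concepts: list[list[str]] = []
    for ans in answers:
        for cluster in concepts:
            if llm_says_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            concepts.append([ans])
    # Shannon entropy of the empirical distribution over concepts.
    n = len(answers)
    probs = [len(c) / n for c in concepts]
    return -sum(p * math.log(p) for p in probs if p > 0)

# Example: five samples spanning two concepts yield nonzero entropy.
print(concept_entropy(["Paris", "paris", "Paris", "Lyon", "Lyon"]))
```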
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13111