Generation Space Size: Understanding and Calibrating Open-Endedness of LLM Generations

ICLR 2026 Conference Submission13263 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: large language models, generation space, uncertainty quantification, calibration
TL;DR: We introduce generation space size (GSS) and GSSBench, an evaluation framework for assessing how well metrics represent GSS and how well models calibrate it, and we establish connections to LLM grounding, reasoning, and diversity optimization.
Abstract: Different open-ended generation tasks require different degrees of output diversity. However, current LLMs are often miscalibrated: they collapse to overly homogeneous outputs for creative tasks and hallucinate diverse but incorrect responses for factual tasks. We argue that these two failure modes are unified by, and can both be addressed through, the notion of *effective generation space size* (GSS) --- the number of semantically distinct outputs a model considers for a prompt. We present GSSBench, a task suite of prompt pairs with ground-truth GSS relationships, to assess different metrics and understand where models diverge from desired behavior. We find that hallucination detection metrics, particularly EigenScore, consistently outperform standard diversity and uncertainty quantification metrics while using only model internals, providing interpretable insights into a model's internal task representations. We demonstrate three applications of GSS: (1) detecting prompt ambiguity and predicting clarification questions for better grounding, (2) interpreting overthinking and underthinking in reasoning models, and (3) steering models to expand their generation space to yield high-quality and diverse outputs.
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 13263