How Stable is the Next Token? A Geometric View of LLM Prediction Stability

ICLR 2026 Conference Submission 489 Authors

01 Sept 2025 (modified: 23 Dec 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: LLM, Post Training
Abstract: Large Language Models (LLMs) exhibit impressive capabilities yet remain sensitive to slight variations in the input context, which hampers their reliability. Conventional metrics such as accuracy and perplexity fail to assess local prediction robustness, because normalized output probabilities can obscure how resilient an LLM's internal state is to perturbations. We introduce the Token Constraint Bound, a novel metric that quantifies the maximum internal-state perturbation an LLM can withstand before its dominant next-token prediction changes significantly. Intrinsically linked to the geometry of the output embedding space, the bound provides insight into the stability of the model's internal predictive commitment. Our experiments show that the bound correlates with effective prompt engineering and uncovers critical prediction instabilities missed by perplexity during in-context learning and text generation. It offers a principled, complementary approach for analyzing and potentially improving the contextual stability of LLM predictions.
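The sketch below is an illustrative guess at how a geometric bound of this kind could be computed, not the paper's actual definition. It assumes a purely linear unembedding head (logits = W @ h, with no bias or final layer norm) and measures L2 perturbations of the hidden state: for the current top token t, the smallest perturbation of h that lets a competitor token j overtake it is (logits[t] - logits[j]) / ||W[t] - W[j]||, and minimizing over j gives the distance to the nearest argmax decision boundary. The function name token_constraint_bound is ours, chosen only to echo the metric's name.

```python
import numpy as np

def token_constraint_bound(h: np.ndarray, W: np.ndarray) -> float:
    """Distance from hidden state h to the nearest argmax decision boundary.

    h: hidden state at the prediction position, shape (d,)
    W: output (unembedding) matrix, shape (vocab_size, d)
    Assumes logits = W @ h with no bias or final layer norm.
    """
    logits = W @ h
    top = int(np.argmax(logits))
    margins = logits[top] - logits                # non-negative logit margins
    normals = np.linalg.norm(W[top] - W, axis=1)  # ||W[top] - W[j]|| per token
    with np.errstate(divide="ignore", invalid="ignore"):
        # distance to the boundary against each competitor token j
        dists = np.where(normals > 0.0, margins / normals, np.inf)
    dists[top] = np.inf                           # exclude the top token itself
    return float(dists.min())

# Toy usage with random weights; a real LLM would supply h and W.
rng = np.random.default_rng(0)
W = rng.standard_normal((1000, 64))
h = rng.standard_normal(64)
print("geometric stability bound:", token_constraint_bound(h, W))
```

A smaller value means a smaller hidden-state perturbation suffices to flip the dominant next-token prediction, i.e. the prediction is locally less stable.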
Primary Area: foundation or frontier models, including LLMs
Submission Number: 489