Keywords: Hallucinations, Epistemic uncertainty, Degenerate text, Fallback behaviors
TL;DR: When uncertain, LLMs exhibit fallback behaviors such as degenerate text and hallucinations. These behaviors are linked and follow a strict order. Factors such as the amount of pretraining, parameter count, and instruction-following training affect which fallback is used.
Abstract: Large language models (LLMs) often exhibit undesirable behaviors, such as hallucinations and sequence repetitions.
We propose to view these behaviors as fallbacks that models exhibit under epistemic uncertainty, and investigate the connection between them.
We categorize fallback behaviors — sequence repetitions, degenerate text, and hallucinations — and extensively analyze them in models from the same family that differ by the amount of pretraining tokens, parameter count, or the inclusion of instruction-following training.
Our experiments reveal a clear and consistent ordering of fallback behaviors across all these axes:
the more advanced an LLM is (i.e., trained on more tokens, having more parameters, or being instruction-tuned),
the further its fallback behavior shifts from sequence repetitions to degenerate text, and then to hallucinations.
Moreover, the same ordering is observed during the generation of a single sequence, even for the best-performing models; as uncertainty increases, models shift from generating hallucinations to producing degenerate text and finally sequence repetitions.
Lastly, we demonstrate that while common decoding techniques, such as random sampling, alleviate unwanted behaviors like sequence repetitions, they increase harder-to-detect hallucinations.
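The decoding contrast in the last point can be illustrated with any open model. Below is a minimal sketch (not taken from the paper; the model name "gpt2", the prompt, and the sampling hyperparameters are placeholder choices) comparing greedy decoding with random sampling via the Hugging Face `transformers` generate API: greedy decoding is deterministic and more prone to repetition loops, while temperature/top-p sampling avoids them but can surface fluent, unsupported continuations instead.

```python
# Illustrative sketch only: greedy decoding vs. random (top-p/temperature) sampling.
# Model, prompt, and hyperparameters are placeholder choices for the example.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of the smallest country in the world is"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: deterministic, more likely to fall into sequence repetitions.
greedy_ids = model.generate(
    **inputs, do_sample=False, max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)

# Random sampling: breaks repetition loops, but under uncertainty the model may
# instead produce fluent yet unsupported text (harder-to-detect hallucinations).
sampled_ids = model.generate(
    **inputs, do_sample=True, top_p=0.9, temperature=1.0, max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,
)

print("greedy :", tokenizer.decode(greedy_ids[0], skip_special_tokens=True))
print("sampled:", tokenizer.decode(sampled_ids[0], skip_special_tokens=True))
```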
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2833