Keywords: Large Language Models, AI for Social Good, Hallucination and Confabulation, Narrative Modeling, Data Contamination and Memorization, Computational Creativity, Evidence-Grounded Generation
TL;DR: We introduce critical confabulation (evidence-constrained LLM hallucination for reparative storytelling) and evaluate LLMs on this task via open-ended timeline reconstruction; we show that controlled hallucinations have unique social affordances.
Abstract: LLMs hallucinate, yet some confabulations can have social affordances if carefully bounded. We propose critical confabulation (inspired by critical fabulation from literary and social theory): the use of LLM hallucinations to "fill in the gaps" left in archives by social and political inequality, and to reconstruct divergent yet evidence-bound narratives for history's "hidden figures". We simulate these gaps with an open-ended narrative cloze task, asking LLMs to generate a masked event in a character-centric timeline sourced from a novel corpus of unpublished texts. We evaluate fully open models audited for data contamination (the OLMo-2 family) alongside unaudited open-weight and proprietary baselines, under a range of prompts designed to elicit controlled and useful hallucinations. Our findings validate LLMs' foundational narrative-understanding capabilities for critical confabulation, and show how controlled, well-specified hallucinations can support LLM applications for knowledge production without collapsing speculation into historical inaccuracy or a loss of fidelity.
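To make the evaluation setup concrete, here is a minimal sketch of an open-ended narrative cloze prompt as described above: one event in a character-centric timeline is masked, and the model is asked to reconstruct it under the constraint of the surrounding evidence. All names here (`Event`, `build_cloze_prompt`, `MASK_TOKEN`, the example timeline) are illustrative assumptions, not the paper's actual code or data.

```python
# Illustrative sketch of an open-ended narrative cloze prompt:
# mask one event in a character-centric timeline, then ask a model
# to generate an evidence-consistent account of the missing event.

from dataclasses import dataclass

MASK_TOKEN = "[MASKED EVENT]"  # hypothetical placeholder string


@dataclass
class Event:
    date: str         # e.g. "1919"; granularity depends on the source text
    description: str  # one attested event in the character's life


def build_cloze_prompt(character: str, timeline: list[Event], mask_idx: int) -> str:
    """Render the timeline with one event masked, then append an
    open-ended instruction asking the model to fill the gap."""
    lines = []
    for i, ev in enumerate(timeline):
        desc = MASK_TOKEN if i == mask_idx else ev.description
        lines.append(f"- {ev.date}: {desc}")
    return (
        f"Below is a timeline of events in the life of {character}, "
        "with one event masked.\n"
        + "\n".join(lines)
        + f"\n\nWrite a plausible account of {MASK_TOKEN}, staying "
        "within what the surrounding events support."
    )


if __name__ == "__main__":
    # Invented example timeline, for illustration only.
    timeline = [
        Event("1901", "Born in a small port town."),
        Event("1919", "Leaves home to apprentice with a printer."),
        Event("1925", "Publishes a first pamphlet under a pseudonym."),
    ]
    print(build_cloze_prompt("the narrator", timeline, mask_idx=1))
```

The masked-event generation can then be scored against the held-out event for evidence-boundedness; the exact prompts and scoring criteria would follow the paper's protocol rather than this sketch.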
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 22924