The Dance of Hallucination and Creativity in LLMs’ Decoding Layers via the Lens of Question Answering

ACL ARR 2025 July Submission1141 Authors

29 Jul 2025 (modified: 23 Aug 2025) · CC BY 4.0
Abstract: Large language models (LLMs) are known to hallucinate, a phenomenon often linked to creativity. Building on prior research that focuses on theoretical or qualitative analyses, our work uses a quantitative approach to systematically examine the relationship between hallucination and creativity in LLMs. Given the complex nature of creativity, we draw inspiration from philosophy and propose a creativity definition tailored to LLMs in Question Answering (QA) tasks. Further, we introduce an evaluation framework, *HCL*, to examine the relationship between **H**allucination and **C**reativity across different **L**ayers of LLMs during decoding. Our empirical analysis reveals a tradeoff between hallucination and creativity that is consistent across layer depth, model type, and model size. Notably, across different model architectures, we identify a specific layer at each model size that optimally balances this tradeoff. This optimal layer tends to appear in the early layers of larger models, and the model's confidence is significantly higher at this layer. These findings provide a quantitative perspective that offers new insights into the interplay between LLM creativity and hallucination.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: hierarchical & concept explanations
Contribution Types: Model analysis & interpretability
Languages Studied: English
Previous URL: https://openreview.net/forum?id=vW0wxqNryV
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: No, I want the same set of reviewers from our previous submission (subject to their availability).
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: In Ethics Statement
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: In Reference
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: Yes
B6 Elaboration: In Appendix A and Section 4.1
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: In Appendix B
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: In Appendix B
C3 Descriptive Statistics: N/A
C4 Parameters For Packages: N/A
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: No
E1 Information About Use Of Ai Assistants: N/A
Author Submission Checklist: yes
Submission Number: 1141