Keywords: AI creativity, Controlled hallucination, Creative Utility Score, Novelty and plausibility, Adaptive agent architecture, Scientific discovery, Responsible AI
TL;DR: This paper shows how controlled hallucinations in large language models, guided by the Creative Utility Score and adaptive regulation, can foster creativity and scientific discovery while safeguarding reliability.
Abstract: Hallucinations in large language models (LLMs) are widely regarded as failures that undermine reliability. Yet, in human cognition, speculative ideas that initially lack verification have often served as the seeds of creativity and discovery. This paper advances the hypothesis that hallucinations, when systematically controlled, can be reframed as mechanisms for creative ideation.
We introduce the Creative Utility Score (CUS), a metric that balances novelty against plausibility, and propose an adaptive agent architecture that dynamically regulates hallucination intensity across exploratory, grounding, and adaptive modes. Our framework operationalizes a creativity-inspired cycle of divergent and convergent reasoning, enabling AI systems to generate bold hypotheses while safeguarding factual accuracy.
Empirical evaluations in mathematics and biomedicine demonstrate that adaptive control significantly increases the production of novel and useful conjectures, while preserving verification success and calibration. These findings establish hallucination not as an error to suppress, but as a resource to channel responsibly.
By reframing hallucination as creativity with safeguards, this work provides both a theoretical foundation and a practical pathway for AI systems that aspire not only to replicate knowledge but also to expand the frontier of scientific discovery. All code and experiments are openly available at https://github.com/myai007/AI_Creativity to ensure full reproducibility.
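For intuition, a minimal sketch of how such a score and mode controller could look, assuming a weighted geometric mean for the CUS and threshold-based mode switching; the formula, function names, and thresholds below are illustrative assumptions, not the paper's actual definitions (those are in the linked repository):

```python
# Hypothetical sketch only -- the paper's actual CUS definition and adaptive
# controller may differ; values and thresholds here are illustrative.

def creative_utility_score(novelty: float, plausibility: float, alpha: float = 0.5) -> float:
    """Assumed form of a Creative Utility Score: a weighted geometric mean that
    rewards ideas that are both novel and plausible, and collapses to zero when
    either quality is absent."""
    return (novelty ** alpha) * (plausibility ** (1.0 - alpha))


def select_mode(cus: float, explore_threshold: float = 0.6, ground_threshold: float = 0.3) -> str:
    """Assumed adaptive regulation: increase hallucination intensity (exploratory
    mode) when recent outputs score well, fall back to grounding when they do
    not, and remain adaptive in between."""
    if cus >= explore_threshold:
        return "exploratory"   # loosen constraints, favor divergent generation
    if cus <= ground_threshold:
        return "grounding"     # tighten constraints, favor verification
    return "adaptive"          # interpolate between the two regimes


if __name__ == "__main__":
    # Example: a bold but only moderately plausible conjecture
    score = creative_utility_score(novelty=0.9, plausibility=0.5)
    print(f"CUS = {score:.2f}, next mode = {select_mode(score)}")
```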
Submission Number: 119