Abstract: The operational need for structured data from Large Language Models (LLMs) is in direct conflict with the cognitive processes that foster creativity. While formats like JSON are essential for downstream applications, this paper investigates the critical, unquantified cost of such constraints on creative performance. We conducted a large-scale analysis across multiple creative tasks, comparing the creativity of LLM-generated responses in a freeform text baseline against six structured formats. Our results reveal that forcing structured output degrades creativity: by over 17\% on average when models must infer a JSON structure, and by up to 26\% in the most severe cases. We deconstruct this degradation into a dominant "creative constraint" effect, in which the cognitive load of simultaneous creation and formatting harms ideation, and a weaker, opposing "format bias" effect, in which LLM judges slightly prefer well-structured output. Consequently, we propose and validate a practical "generate-then-structure" workflow that mitigates this degradation, improving both the substance and the perceived quality of creative work.
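To make the proposed workflow concrete, the sketch below separates the two stages: an unconstrained creative pass followed by a purely mechanical formatting pass. This is a minimal illustration, not the paper's implementation; the `llm` callable is a hypothetical stand-in for any chat-completion API, and the two-key output schema is an assumption for demonstration only.

```python
import json
from typing import Callable

def generate_then_structure(
    prompt: str,
    llm: Callable[[str], str],  # hypothetical stand-in for any LLM completion call
) -> dict:
    """Two-stage pipeline: create freeform text first, impose JSON structure second."""
    # Stage 1: unconstrained generation. No format instructions are given,
    # so the model's capacity goes to ideation rather than syntax.
    freeform = llm(prompt)

    # Stage 2: a separate pass converts the finished text into JSON.
    # The "title"/"body" schema here is illustrative, not from the paper.
    structuring_prompt = (
        "Convert the following response into a JSON object with keys "
        '"title" and "body". Return only valid JSON.\n\n' + freeform
    )
    structured = llm(structuring_prompt)
    return json.loads(structured)
```

Because the creative content is already fixed after stage 1, the second call could in principle be replaced by constrained decoding or a cheaper model without affecting the substance of the output.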
Paper Type: Short
Research Area: Generation
Research Area Keywords: analysis, text-to-text generation, automatic evaluation, inference methods
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied:
Submission Number: 539