CML-Bench: A Framework for Evaluating and Enhancing LLM-Powered Movie Script Generation

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Creative Text Benchmarking, AI in Filmmaking, Large Language Models
TL;DR: We introduce CML-Bench, a framework combining an evaluation benchmark with an intermediate syntax, Cinematic Markup Language (CML), that helps Large Language Models write more structured and coherent movie scripts.
Abstract: Large Language Models (LLMs) have demonstrated remarkable proficiency in generating highly structured texts. Movie scripts, however, demand more than structural organization: they require nuanced storytelling and emotional depth, the 'soul' of compelling cinema, which LLMs often fail to capture. To investigate this deficiency, we first curated CML-Dataset, a collection of (summary, content) pairs for Cinematic Markup Language (CML), where 'content' consists of segments from esteemed, high-quality movie scripts and 'summary' is a concise description of that content. Through an in-depth analysis of the intrinsic multi-shot continuity and narrative structures of these authentic scripts, we identified three pivotal dimensions for quality assessment: Dialogue Coherence (DC), Character Consistency (CC), and Plot Reasonableness (PR). Informed by these findings, we propose CML-Bench, featuring quantitative metrics across these dimensions. CML-Bench assigns high scores to well-crafted, human-written scripts while pinpointing the weaknesses of screenplays generated by LLMs. To further validate our benchmark, we introduce CML-Instruction, a prompting strategy that provides detailed instructions on character dialogue and event logic to guide LLMs toward more structured and cinematically sound scripts. Extensive experiments validate the effectiveness of our benchmark and demonstrate that LLMs guided by CML-Instruction generate higher-quality screenplays, with results aligned with human preferences. Our work offers a comprehensive framework for both evaluating and guiding LLMs in screenplay authoring.
Primary Area: datasets and benchmarks
Submission Number: 15436