Finding Flawed Fictions: Evaluating Complex Reasoning in Language Models via Plot Hole Detection

Published: 08 Jul 2025 · Last Modified: 26 Aug 2025 · COLM 2025 · CC BY 4.0
Keywords: narrative understanding, reasoning, synthetic data generation, test time scaling, evaluation, natural language generation
TL;DR: We propose an automatic method to generate stories containing logical inconsistencies, which we use to curate a benchmark for evaluating LLMs' ability to reason about plot holes in stories.
Abstract: Stories are a fundamental aspect of human experience. Engaging deeply with stories and spotting plot holes—inconsistencies in a storyline that break the internal logic or rules of a story’s world—requires nuanced reasoning skills, including tracking entities and events and their interplay, abstract thinking, pragmatic narrative understanding, commonsense and social reasoning, and theory of mind. As Large Language Models (LLMs) increasingly generate, interpret, and modify text, rigorously assessing their narrative consistency and deeper language understanding becomes critical. However, existing benchmarks focus mainly on surface-level comprehension. In this work, we propose plot hole detection in stories as a proxy for evaluating language understanding and reasoning in LLMs. We introduce FlawedFictionsMaker, a novel algorithm that controllably and carefully synthesizes plot holes in human-written stories. Using this algorithm, we construct FlawedFictions, a benchmark for evaluating LLMs’ plot hole detection abilities that is robust to contamination, with human filtering ensuring high quality. We find that state-of-the-art LLMs struggle to accurately solve FlawedFictions regardless of the reasoning effort allowed, with performance degrading significantly as story length increases. Finally, we show that LLM-based story summarization and story generation are prone to introducing plot holes, with plot hole detection rates increasing by more than 50% and 100%, respectively, relative to the human-written originals.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1693