Keywords: large language models, model behavior, interpretability, preferences of LLMs, narrative preferences, narratology, structured constraint selection, selection task, prompt sensitivity
Abstract: We introduce a constraint-selection experimental design for measuring the narrative preferences of Large Language Models (LLMs). This design offers an interpretable lens on LLMs' narrative behavior. We develop a library of 200 narratology-grounded constraints and elicit selections from six LLMs under three instruction types: basic, quality-focused, and creativity-focused. Our findings show that models consistently prioritize Style over narrative content elements such as Event, Character, and Setting. Style preferences remain stable across models and instruction types, whereas content elements exhibit cross-model divergence and instructional sensitivity. These results suggest that LLMs hold latent narrative preferences, which should inform how the NLP community evaluates and deploys models in creative domains.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: model behavior analysis; feature attribution; probing; robustness; selection task; values and culture
Contribution Types: Model analysis & interpretability, Data analysis
Languages Studied: English
Submission Number: 7849