Learning to Play Like Humans: A Framework for LLM Adaptation in Interactive Fiction Text-Based Games
Abstract: Interactive Fiction text-based adventure games (IF games) are environments in which players interact through natural language commands. While recent advances in artificial intelligence agents have reignited interest in IF games as a domain for studying decision-making, existing approaches prioritize task-specific performance metrics over human-like comprehension of narrative context and gameplay logic. This work presents a cognitively inspired framework that guides Large Language Models (LLMs) to learn and play IF games systematically. Our proposed $\textbf{L}$earning to $\textbf{P}$lay $\textbf{L}$ike $\textbf{H}$umans (LPLH) framework integrates three key components: (1) structured map building to capture spatial and narrative relationships, (2) action learning to identify context-appropriate commands, and (3) feedback-driven experience analysis to refine decision-making over time. By aligning agent behavior with narrative intent and commonsense constraints, LPLH moves beyond purely exploratory strategies to deliver more interpretable, human-like performance. Crucially, this approach draws on cognitive-science principles to more closely simulate how human players read, interpret, and respond within narrative worlds. As a result, LPLH reframes the IF-game challenge as a learning problem for LLM-based agents, offering a new path toward robust, context-aware gameplay in complex text-based environments.
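The abstract's three components could be organized as state held by an agent that is updated after each game turn. The sketch below is purely illustrative: the class and method names (`LPLHAgent`, `update_map`, `learn_action`, `analyze_feedback`) are assumptions for exposition, not the paper's actual implementation or API.

```python
# Hypothetical sketch of the three LPLH components as per-turn agent state.
# All names are illustrative assumptions, not the paper's implementation.

class LPLHAgent:
    def __init__(self):
        self.map = {}               # (1) structured map: location -> {exit: neighbor}
        self.known_actions = set()  # (2) learned context-appropriate commands
        self.experience = []        # (3) feedback-driven experience log

    def update_map(self, location, exits):
        # (1) Record spatial/narrative relationships as they are observed.
        self.map.setdefault(location, {}).update(exits)

    def learn_action(self, command, succeeded):
        # (2) Retain only commands the environment accepted.
        if succeeded:
            self.known_actions.add(command)

    def analyze_feedback(self, command, feedback, reward):
        # (3) Store (command, feedback, reward) tuples to refine later decisions,
        #     e.g. by including them in the LLM's prompt on future turns.
        self.experience.append((command, feedback, reward))

agent = LPLHAgent()
agent.update_map("kitchen", {"north": "hallway"})
agent.learn_action("open fridge", succeeded=True)
agent.analyze_feedback("open fridge", "The fridge is empty.", 0)
```

In such a design, each component's state would be serialized into the LLM's context so that later command choices are conditioned on the accumulated map, action set, and experience.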
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: NLP Application
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 4124