Keywords: Large Language Models, Game AI, Pokémon Battles, Strategic Reasoning, Turn-Based Games, LLM Evaluation, Procedural Content Generation
Abstract: Strategic decision-making in Pokémon battles presents a unique testbed for evaluating large language models. Pokémon battles demand reasoning about type matchups, statistical trade-offs, and risk assessment, all skills that mirror human strategic thinking. This work examines whether Large Language Models (LLMs) can serve as competent battle agents, capable of both making tactically sound decisions and generating novel, balanced game content. We developed a turn-based Pokémon battle system in which LLMs select moves based on battle state rather than pre-programmed logic. The framework captures essential Pokémon mechanics: type effectiveness multipliers, stat-based damage calculations, and multi-Pokémon team management. Through systematic evaluation across multiple model architectures, we measured win rates, decision latency, type-alignment accuracy, and token efficiency. These results suggest LLMs can function as dynamic game opponents without domain-specific training, offering a practical alternative to reinforcement learning for turn-based strategic games. The dual capability of tactical reasoning and content creation positions LLMs as both players and designers, with implications for procedural generation and adaptive difficulty systems in interactive entertainment.
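The mechanics the abstract names (type effectiveness multipliers on top of stat-based damage calculations) can be illustrated with a minimal sketch. The type chart entries, function names, and the simplified damage formula below are illustrative assumptions, not the paper's actual implementation.

```python
# Simplified type chart: multiplier keyed by (attacking type, defending type).
# Unlisted pairings default to neutral (1.0x) effectiveness.
TYPE_CHART = {
    ("water", "fire"): 2.0,      # super effective
    ("fire", "water"): 0.5,      # not very effective
    ("electric", "ground"): 0.0, # immune
}

def effectiveness(move_type: str, defender_type: str) -> float:
    """Look up the type-effectiveness multiplier, defaulting to neutral."""
    return TYPE_CHART.get((move_type, defender_type), 1.0)

def damage(level: int, power: int, attack: int, defense: int,
           move_type: str, defender_type: str) -> int:
    """Stat-based base damage scaled by the type multiplier (hypothetical formula)."""
    base = (2 * level / 5 + 2) * power * attack / defense / 50 + 2
    return int(base * effectiveness(move_type, defender_type))
```

An LLM battle agent would receive this kind of state (stats, types, remaining team) serialized into its prompt and return a move choice, with the engine applying the arithmetic.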
Paper Type: Long
Research Area: AI/LLM Agents
Research Area Keywords: LLM/AI agents, interactive systems, evaluation and metrics, human-AI interaction/cooperation, chain-of-thought, prompting, benchmarking, evaluation methodologies
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 8790