From Higher-Likelihood to Broader Coverage: Exploratory Policy Simulations with LLM Agents
Keywords: Large Language Models, Policy Simulation, Exploratory Modeling, Agent-Based Simulation, Action Enumeration, Institutional Analysis
TL;DR: Shifting LLM simulations to coverage-oriented generation via enumeration reveals diverse edge cases, demonstrating the potential of LLM agent simulation as a proactive tool to help policymakers stress-test institutional rules.
Abstract: Robust policy simulation requires identifying not only likely outcomes but also diverse failure modes and edge cases. However, standard Large Language Model (LLM) agents typically converge on high-probability, profile-bounded trajectories, limiting their utility for exploratory analysis. To address this coverage bottleneck, we adopt enumeration as a turn-level action generation mechanism to elicit broader behavioral possibilities. We evaluate this approach in a rule-governed dormitory conflict scenario grounded in the Institutional Analysis and Development (IAD) framework. Comparing standard persona-prompted agents with enumeration-enhanced agents, we find that turn-level enumeration significantly expands semantic and behavioral diversity without compromising dialogue naturalness. Our analysis reveals that this method uncovers complex escalation patterns missed by standard approaches, demonstrating that shifting from likelihood-maximization to coverage-oriented generation enables LLM agents to serve as effective instruments for exploratory policy analysis.
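The turn-level enumeration idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the candidate generator is a stub standing in for an LLM prompted to enumerate distinct actions, and the coverage-oriented selector (word-level Jaccard distance, max-min selection) is an assumed stand-in for whatever semantic-diversity criterion the authors use.

```python
def enumerate_candidates(turn):
    # Stub for an LLM call that enumerates distinct possible actions
    # at this turn (action texts are illustrative, not from the paper).
    return [
        "apologize and propose quiet hours",
        "escalate to the resident advisor",
        "ignore the complaint",
        "negotiate a room swap",
    ]

def jaccard_distance(a, b):
    """Word-level Jaccard distance as a cheap proxy for semantic diversity."""
    wa, wb = set(a.split()), set(b.split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def select_for_coverage(candidates, history):
    """Pick the candidate maximizing its minimum distance to past actions,
    i.e. coverage-oriented selection rather than likelihood maximization."""
    if not history:
        return candidates[0]
    return max(candidates,
               key=lambda c: min(jaccard_distance(c, h) for h in history))

history = []
for turn in range(3):
    action = select_for_coverage(enumerate_candidates(turn), history)
    history.append(action)

print(history)
```

A standard persona-prompted agent corresponds to sampling one high-probability action per turn; the sketch instead enumerates a candidate set and steers each turn toward behaviors not yet explored, which is the shift from likelihood-maximization to coverage that the abstract argues for.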
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 22