From Prediction to Conditional Exploration in LLM Agent Simulations for AI Policy Governance

Published: 09 May 2026, Last Modified: 09 May 2026, PoliSim@CHI 2026, CC BY 4.0
Keywords: LLM Agents, Policy, Governance
Abstract: Large Language Model (LLM)–based agent simulations are increasingly proposed as tools for informing AI governance. However, policymaking is inherently uncertain, high-stakes, and socially embedded. This position paper argues that LLM-agent simulations should not function as predictive policy engines but as structured uncertainty exploration systems. Drawing on generative agent architectures and recent multi-agent simulation work, I examine two challenges: interpreting conditional simulation outcomes under deep uncertainty, and establishing transparency in architecturally opaque systems. I propose a co-evolutionary framework integrating robustness stress-testing and layered auditability, reframing simulation from prediction toward conditional exploration and institutional accountability in AI policy design.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 11