Human-Simulation Interaction: From Prediction to Exploration in LLM Agent Simulations for Policy

Published: 09 May 2026, Last Modified: 09 May 2026, PoliSim@CHI 2026, CC BY 4.0
Keywords: agent-based modeling, LLM agent simulation, generative agents, human-simulation interaction, policy simulation, interaction design, design metaphor, wicked problems
TL;DR: Trust in policy simulations is not just an issue of model accuracy but of the inherited design of human-simulation interaction.
Abstract: Agent-based models have historically served as tools for generative explanation, providing testbeds in which candidate micro-level behavioral rules can be tested for their capacity to produce observed macro-level phenomena. The integration of Large Language Models into agent-based simulation has expanded what these models can represent, but it has also introduced an unexamined shift in how users engage with them. We argue that current generative agent-based models (GABMs) inherit the dominant interaction metaphor of conversational LLM interfaces: a question-answer pattern that positions users as consumers of system output rather than explorers of a possibility space. In the context of policy, where problems are wicked and ground truth is unknowable in advance, this metaphor produces a trust deficit that cannot be resolved through improved model accuracy alone. We open a design space we call human-simulation interaction, and argue that warranted trust requires interaction metaphors that restore the exploratory capacity simulation has historically supported.
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 17