Let Them Speak: Finding Who Policy Leaves Behind via Blind-Spot-Targeted LLM Simulation
Keywords: LLM Simulation, Policy Auditing, Legislative Blind Spots, Agent-Based Modeling, Algorithmic Fairness
TL;DR: Instead of simulating everyone, we target a policy's blind spots with LLM personas and generate amendments that match real legislative revisions.
Abstract: LLM-based social simulation has gained attention as a tool for predicting policy outcomes, yet growing evidence suggests that LLMs cannot faithfully represent the full diversity of society. We turn this limitation into a feature. Rather than simulating all citizens, we propose Let Them Speak, a framework that deliberately identifies the people a policy structurally excludes and generates their voices. Given a policy text and demographic microdata, the system automatically discovers individuals at the boundary of eligibility---those excluded by narrow numerical margins or by gaps the law fails to anticipate---and instantiates them as LLM personas who articulate policy critiques endogenously, without researcher-injected hypotheses. Applied to South Korea's Jeonse Fraud Special Act, the framework's proposed amendment matched 5 of 8 criteria in the actual 2026 legislative revision, compared to 1 of 8 from a randomly sampled control group. These results suggest that targeted blind-spot discovery can yield more actionable legislative recommendations than untargeted simulation.
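The abstract's core mechanism, discovering individuals "excluded by narrow numerical margins" from demographic microdata, can be illustrated with a minimal sketch. The eligibility rule, field names, threshold, and margin below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of narrow-margin blind-spot discovery:
# flag individuals who just miss an eligibility cutoff.
# All field names and numbers here are invented for illustration.

def near_boundary(people, threshold, margin, key):
    """Return people just outside eligibility: their value on `key`
    exceeds the threshold, but by no more than `margin`."""
    return [p for p in people
            if threshold < p[key] <= threshold + margin]

# Toy microdata: renter deposit amounts (millions of KRW, invented).
microdata = [
    {"id": 1, "deposit": 290},   # eligible (under the cap)
    {"id": 2, "deposit": 305},   # excluded by a narrow margin -> flagged
    {"id": 3, "deposit": 500},   # clearly outside the policy's scope
]

flagged = near_boundary(microdata, threshold=300, margin=20, key="deposit")
print([p["id"] for p in flagged])  # prints [2]
```

Each flagged record would then seed an LLM persona prompt, so critiques emerge from the simulated individual's situation rather than from a researcher-injected hypothesis.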
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 20