Keywords: multi-agent LLMs, chain-of-thought, leader elections
TL;DR: Transparent reasoning outweighs communication clarity in multi-agent leader elections, producing agendas that better secure the long-term survival of the society.
Abstract: With the rapid evolution of multi-agent LLM societies, from generative-agent “towns” that simulate day-to-day social life to electoral frameworks where AI collectives debate and vote, it has become crucial to study how persuasive leader personas, voting rules, and shared-resource incentives co-evolve to shape democratic choices and the downstream distribution of communal assets. We extend the GOVSIM fishing-commons benchmark with periodic leader elections, thereby tying long-horizon resource stewardship to short-horizon persuasion. Thirteen LLM agents harvest fish while electing a leader every ten steps under one of five canonical voting rules. Each ballot pits four scripted personas against one another, orthogonally varying reasoning transparency and communication clarity. Across five competitive 7–9B-parameter models we observe two consistent patterns. First, leaders who expose chain-of-thought reasoning win 92% of elections, mirroring single-agent findings on the benefits of explicit reasoning. Second, only a few models sustain the commons for all eight election cycles, underscoring strong architectural differences. These results suggest that transparent reasoning and model architecture, rather than communication clarity, are the critical levers for deploying safe, self-governing LLM societies.
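To make the protocol concrete, the sketch below shows a minimal version of the election-augmented commons loop described above. It is an illustration under stated assumptions, not the paper's implementation: the identifiers (`PERSONAS`, `plurality_vote`, `run_society`), the plurality rule standing in for one of the five unspecified canonical rules, the stock and regrowth values, and the random stand-ins for LLM decisions are all hypothetical.

```python
# Hypothetical sketch of the election-augmented GOVSIM loop.
# Grounded in the abstract: 13 agents, an election every 10 steps,
# a 2x2 persona grid (reasoning transparency x communication clarity),
# and 8 election cycles. Everything else is an assumption.
import itertools
import random

# 2x2 persona grid: reasoning transparency x communication clarity.
PERSONAS = [
    {"reasoning": r, "clarity": c}
    for r, c in itertools.product(["transparent", "opaque"], ["clear", "vague"])
]

N_AGENTS = 13          # thirteen LLM agents harvesting the commons
ELECTION_PERIOD = 10   # a leader election every ten simulation steps
N_CYCLES = 8           # the commons must survive eight election cycles


def plurality_vote(ballots):
    """Plurality as a stand-in canonical rule: most first-choice votes wins."""
    tally = {}
    for b in ballots:
        tally[b] = tally.get(b, 0) + 1
    return max(tally, key=tally.get)


def run_society(harvest_policy, cast_ballot):
    """harvest_policy(agent, leader, stock) -> catch; cast_ballot(agent) -> persona index.

    Returns the step at which the commons collapsed, or None if it survived.
    """
    stock = 100.0  # shared fish stock (arbitrary starting value)
    leader = None
    for step in range(N_CYCLES * ELECTION_PERIOD):
        if step % ELECTION_PERIOD == 0:
            ballots = [cast_ballot(agent) for agent in range(N_AGENTS)]
            leader = PERSONAS[plurality_vote(ballots)]
        stock -= sum(harvest_policy(agent, leader, stock) for agent in range(N_AGENTS))
        if stock <= 0:
            return step  # commons collapsed
        stock *= 1.1  # simple regrowth assumption
    return None


# Usage with random stand-ins for the agents' LLM-driven decisions:
collapse_step = run_society(
    harvest_policy=lambda a, leader, s: random.uniform(0, s / N_AGENTS),
    cast_ballot=lambda a: random.randrange(len(PERSONAS)),
)
print("collapsed at step", collapse_step)
```

In the actual benchmark, `harvest_policy` and `cast_ballot` would be LLM calls conditioned on the elected persona's scripted campaign messages; the sketch only fixes the timing structure that ties short-horizon persuasion to long-horizon stewardship.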
Submission Number: 17