From Competition to Coordination: Market Making as a Scalable Framework for Safe and Aligned Multi-Agent LLM Systems
Keywords: Multi-Agent LLM, AI Safety, Market Making
TL;DR: Market making is a framework in which LLM agents iteratively trade probabilistic beliefs as market makers and traders, improving collective truthfulness and reasoning.
Abstract: Large Language Models (LLMs) are beginning to collaborate, debate, and negotiate with one another, opening new paths toward collective intelligence, but also new risks in coordination, alignment, and truthfulness. In this paper, we show how market mechanisms can structure these multi-agent interactions so that truthful reasoning becomes an emergent property rather than a hand-crafted rule.
We propose a market-making framework in which each LLM agent acts as a market maker or trader, continuously updating and exchanging probabilistic beliefs through negotiation. Instead of enforcing agreement from the top down, the system self-organizes: agents are rewarded for offering accurate and consistent information, and penalized when their beliefs fail to hold up under scrutiny. The result is a decentralized process in which truth emerges from incentive alignment, not from central oversight.
Through experiments across factual reasoning, estimation, and multi-step analytical tasks, we find that market-based coordination consistently improves collective truthfulness and reasoning accuracy, often by more than 10% compared to traditional debate or majority-vote frameworks. Beyond empirical gains, our findings suggest that economic principles like liquidity, price discovery, and arbitrage can serve as powerful design tools for building safer, more transparent, and more self-correcting LLM societies.
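To make the incentive structure concrete, one standard mechanism of the kind the abstract describes is a logarithmic market scoring rule (LMSR): the market maker quotes probabilities, and a trader who moves prices toward their private belief profits exactly when that belief is better calibrated than the current market. This is an illustrative sketch of the general technique, not necessarily the paper's exact trading rule; the function names and the liquidity parameter `b` are assumptions for illustration.

```python
import math

def lmsr_prices(q, b=10.0):
    """Market probabilities under a logarithmic market scoring rule,
    given outstanding share quantities q and liquidity parameter b."""
    z = [math.exp(qi / b) for qi in q]
    s = sum(z)
    return [zi / s for zi in z]

def lmsr_cost(q, b=10.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def trade_to_belief(q, belief, b=10.0):
    """Buy/sell shares so that market prices match the trader's belief.
    Returns (new_q, cost_paid). The trader's expected profit is positive
    only if the belief is more accurate than the prior prices -- the
    incentive-alignment property the framework relies on."""
    # Prices depend on q only up to an additive constant, so anchor the
    # constant at the first outcome: q_i' = b * log(belief_i) + const.
    const = q[0] - b * math.log(belief[0])
    new_q = [b * math.log(p) + const for p in belief]
    cost = lmsr_cost(new_q, b) - lmsr_cost(q, b)
    return new_q, cost
```

In a multi-agent loop, each LLM agent would report a probability vector, call `trade_to_belief` against the current market state, and later receive the realized payoff, so that persistently inaccurate agents lose influence over the consensus price.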
Submission Number: 38