Learning from Synthetic Labs: Language Models as Auction Participants

ICLR 2026 Conference Submission19097 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: OSP, auctions, proxy bidding, mechanism design, language models
Abstract: This paper investigates the behavior of simulated AI agents (large language models, or LLMs) in auctions, introducing a novel synthetic data-generating process to help facilitate the study and design of auctions. We find that LLMs reproduce well-known findings from the experimental auction literature across a variety of classic auction formats. In particular, LLM bidders produce results consistent with risk-averse human bidders; they perform closer to theoretical predictions in obviously strategy-proof auctions; and, in a real-world eBay-style setting, they strategically produce end-of-auction "sniping" behavior. On prompting, we find that LLMs are robust to naive changes in prompts (e.g., language, currency) but can improve dramatically toward theoretical predictions when given the right mental model (i.e., the language of Nash deviations). We run 1,000+ auctions for less than $400 with GPT-4o models (three orders of magnitude cheaper than modern auction experiments) and develop a framework flexible enough to run auction experiments with any LLM and a wide range of auction design specifications, facilitating further experimental study by decreasing costs and serving as a proof-of-concept for the use of LLM proxies.
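To make the described setup concrete, here is a minimal sketch of one sealed-bid second-price (strategy-proof) auction round with pluggable bidder functions. This is an illustration, not the authors' framework: the function names (`run_second_price_auction`, `truthful`, `risk_averse`) and the lambda stand-ins for LLM proxy bidders are hypothetical; a real run would query an LLM API with an auction prompt and parse its bid.

```python
def run_second_price_auction(bidders, values):
    """Run one sealed-bid second-price auction.

    bidders: list of bid functions, each mapping a private value to a bid.
    values:  list of private values, one per bidder.
    Returns (winner_index, price_paid).
    """
    bids = [bidder(v) for bidder, v in zip(bidders, values)]
    # Rank bidders by bid, highest first.
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    winner = order[0]
    price = bids[order[1]]  # winner pays the second-highest bid
    return winner, price

# Hypothetical stand-ins for LLM proxy bidders.
truthful = lambda v: v            # dominant strategy in a second-price auction
risk_averse = lambda v: 0.9 * v   # bid shading, consistent with human behavior

winner, price = run_second_price_auction([truthful, risk_averse], [100, 80])
# winner is bidder 0 (bid 100 vs 72); price paid is 72.0
```

Swapping in different bid functions (or LLM calls) and auction rules is the kind of flexibility the abstract attributes to the authors' framework.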
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 19097