Evidence from the Synthetic Laboratory: Language Models as Auction Participants

24 Sept 2024 (modified: 04 Dec 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Language Model, Auction, Behavioral Economics, Learning
TL;DR: LLMs can substitute for humans in classic auction experiments, even in settings where human behavioral traits are known to matter.
Abstract:

This paper investigates the behavior of simulated AI agents (large language models, or LLMs) in auctions, validating a novel synthetic data-generating process to help discipline the study and design of auctions. We begin by benchmarking these LLM agents against established experimental results that study agreement or departure between realized economic behavior and predictions from theory; i.e., revenue equivalence between first-price and second-price auctions and improved play in obviously strategy-proof auctions. We find that when LLM-based agents diverge from the predictions of theory, they do so in a way that agrees with behavioral traits observed in the existing experimental economics literature (e.g., risk aversion, and weak play in ‘complicated’ auctions). Our results also suggest that LLMs are bad at playing auctions ‘out of the box’ but can improve their play when given the opportunity to learn. This learning is robust to various prompt specifications and holds across a variety of settings. We run 2,000+ auctions for less than $250 with GPT-4o and GPT-4, and develop a framework flexible enough to run auction experiments with any LLM model and a wide range of auction design specifications.
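The revenue-equivalence benchmark mentioned in the abstract can be sketched numerically. The following is a minimal illustrative simulation, not the paper's actual framework: it compares average seller revenue in first-price and second-price sealed-bid auctions when bidders with i.i.d. Uniform[0,1] values play the risk-neutral equilibrium strategies. All function names here are hypothetical.

```python
import random

def run_auction(values, bid_fn, price_rule):
    # Sealed-bid auction: highest bid wins; the winner pays their own bid
    # ("first") or the second-highest bid ("second").
    bids = [bid_fn(v) for v in values]
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winner = order[0]
    price = bids[winner] if price_rule == "first" else bids[order[1]]
    return winner, price

def average_revenue(n_bidders, bid_fn, price_rule, trials, seed=0):
    # Monte Carlo estimate of expected seller revenue.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(n_bidders)]
        _, price = run_auction(values, bid_fn, price_rule)
        total += price
    return total / trials

n = 4
# Risk-neutral Bayes-Nash equilibrium for i.i.d. Uniform[0,1] values:
# first-price bidders shade their bid to (n-1)/n * v; second-price
# bidders bid truthfully (the auction is strategy-proof).
rev_first = average_revenue(n, lambda v: (n - 1) / n * v, "first", 20000)
rev_second = average_revenue(n, lambda v: v, "second", 20000)
# Revenue equivalence predicts both averages lie near (n-1)/(n+1) = 0.6.
```

Replacing the closed-form `bid_fn` with a call to an LLM that maps a private value to a bid turns the same loop into a behavioral experiment of the kind the paper describes, where deviations from 0.6 (e.g., overbidding from risk aversion) become the object of study.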

Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3377