Emergent Collusion in LLM-Powered Multi-Agent Markets: A Comprehensive Survey of Risks, Mechanisms, Governance, and Regulatory Challenges

Published: 20 Nov 2025, Last Modified: 09 Mar 2026
AAAI 2026 TrustAgent Workshop Poster
License: CC BY 4.0
Keywords: Collusion; LLM-powered AI Agents; Responsible AI; AI Safety
TL;DR: LLM-powered AI agents increasingly display collusive behavior in competitive markets, posing challenges to existing regulation; this survey examines the phenomenon through theoretical, empirical, and policy lenses.
Abstract: Measuring market efficiency and enforcing regulation are becoming increasingly complicated as today's competitive markets are disrupted by sophisticated autonomous agents powered by large language models (LLMs). Although these agents are not explicitly programmed to collude, a concerning tendency toward such behavior has been documented: their intrinsic reward-maximizing incentives can inadvertently produce forms of coordination that circumvent conventional antitrust frameworks. Competition is thus unfairly compromised on the one hand, while on the other, existing regulatory mechanisms are strained by the high risk of algorithmic collusion in these markets. In this study, we survey this emerging phenomenon and provide a systematic analysis of the empirical evidence on collusive behavior among competing LLM-powered agents across diverse markets. We organize our analysis around three scientific and regulatory pillars. First, we characterize the theoretical and empirical risks of collusion arising from game-theoretic principles and Multi-Agent Reinforcement Learning (MARL) dynamics. Second, we elaborate on the mechanisms of collusion, characterized by three primary LLM-enabled strategies: tacit coordination emerging from complex behavioral learning, explicit natural-language cartels, and covert steganographic collaboration. Third, we examine the fundamental governance and regulatory challenges posed by LLM opacity, the limits of current antitrust law with regard to intent, and the difficulty of detection and monitoring. To address this threat, we propose three research priorities: (1) developing robust, interpretable detection methodologies that can distinguish legitimate cooperation from illicit coordination; (2) designing verifiably competitive agent architectures through constrained objective functions and transparent communication protocols; and (3) closing crucial gaps in existing antitrust frameworks, especially the challenges of establishing intent and agreement.
Submission Number: 70