Keywords: Collusion; LLM-powered AI Agents; Responsible AI
TL;DR: LLM-powered AI agents are increasingly displaying collusive behaviors in competitive markets, raising challenges for existing regulations, which this survey explores through theoretical, empirical, and policy lenses.
Abstract: The deployment of AI agents powered by large language models (LLMs) has grown rapidly in competitive markets in recent years. However, these agents have begun to exhibit collusive behaviors that pose significant challenges by potentially circumventing existing regulatory frameworks. To address these challenges, this survey outlines the theoretical and empirical literature as well as the policy implications associated with algorithmic collusion among competing LLM-powered agents across diverse market environments. In our analysis, we consider three fundamental collusion strategies: tacit coordination through behavioral learning, the construction of natural language cartels, and concealed steganographic collaboration. Each strategy provides intuitive insights into the mechanisms underlying collusive behavior. Following this analysis, the survey highlights three key research priorities: (1) developing robust detection methods to distinguish collusion from legitimate cooperation, (2) designing verifiably competitive agent architectures, and (3) formulating legal frameworks that ensure the accountability of autonomous systems. This study aims to highlight the problem of collusion and evaluate proposed measures to address shortcomings in anticipated regulatory responses, with a focus on mitigation strategies through design principles, architectural safeguards, and innovative regulatory frameworks.
Submission Number: 96