Q-learners Can Provably Collude in the Iterated Prisoner's Dilemma

Published: 01 Jan 2023, Last Modified: 30 Sep 2024 · CoRR 2023 · CC BY-SA 4.0
Abstract: The deployment of machine learning systems in the market economy has triggered academic and institutional fears over potential tacit collusion between fully automated agents. Multiple recent economics studies have empirically shown the emergence of collusive strategies among agents guided by machine learning algorithms. In this work, we prove that multi-agent Q-learners playing the iterated prisoner's dilemma can learn to collude. The complexity of the cooperative multi-agent setting yields multiple fixed-point policies for Q-learning: the main technical contribution of this work is to characterize convergence towards a specific cooperative policy. More precisely, in the iterated prisoner's dilemma, we show that with optimistic Q-values, any self-play Q-learner can provably learn a cooperative policy called Pavlov, also referred to as the win-stay, lose-switch policy, which differs sharply from the Pareto-dominated always-defect policy.
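The setting described above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's construction or proof): two tabular Q-learners in self-play on the one-memory iterated prisoner's dilemma, where each agent's state is the previous joint action and Q-values are initialized optimistically, above the highest attainable payoff. The payoff values, learning rate, discount, and exploration rate are all illustrative assumptions.

```python
import random

# Prisoner's dilemma payoffs for the row player, with T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ACTIONS = ("C", "D")
# State = previous joint action from the agent's own perspective
# (own last move, opponent's last move); None marks the first round.
STATES = [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D"), None]

def make_q(init=10.0):
    # Optimistic initialization: every Q-value starts above the
    # maximum payoff (5), so unexplored actions look attractive.
    return {(s, a): init for s in STATES for a in ACTIONS}

def greedy(q, s):
    return max(ACTIONS, key=lambda a: q[(s, a)])

def train(steps=50_000, alpha=0.1, gamma=0.9, eps=0.05, seed=0):
    rng = random.Random(seed)
    q1, q2 = make_q(), make_q()
    s1 = s2 = None
    for _ in range(steps):
        # Epsilon-greedy action selection for both self-play learners.
        a1 = rng.choice(ACTIONS) if rng.random() < eps else greedy(q1, s1)
        a2 = rng.choice(ACTIONS) if rng.random() < eps else greedy(q2, s2)
        r1, r2 = PAYOFF[(a1, a2)], PAYOFF[(a2, a1)]
        n1, n2 = (a1, a2), (a2, a1)  # next states, per-agent perspective
        for q, s, a, r, n in ((q1, s1, a1, r1, n1), (q2, s2, a2, r2, n2)):
            # Standard Q-learning update.
            target = r + gamma * max(q[(n, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])
        s1, s2 = n1, n2
    return q1, q2

q1, q2 = train()
# Pavlov (win-stay, lose-switch) would mean: cooperate after (C,C)
# and (D,D), defect after mismatched rounds.
policy = {s: greedy(q1, s) for s in STATES if s is not None}
```

Whether the greedy policy extracted at the end actually equals Pavlov depends on the hyperparameters and the run; the paper's contribution is precisely to characterize conditions under which this convergence is provable, which this toy simulation does not establish.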