Learning to Negotiate via Voluntary Commitment

Published: 23 Jun 2025 · Last Modified: 24 Jul 2025 · CoCoMARL 2025 Poster · CC BY 4.0
Keywords: Cooperative AI, Multi-agent Reinforcement Learning, Commitment Games, AI Alignment, Multi-agent System, Negotiation
TL;DR: This paper introduces a learnable commitment protocol that enables self-interested agents to negotiate, form binding agreements, and voluntarily cooperate in high-conflict environments without relying on central control or altruism.
Abstract: Partial alignment and conflicting interests among autonomous agents give rise to mixed-motive scenarios in many real-world applications. In practice, however, agents may fail to cooperate even when cooperation yields a better outcome. One well-known reason for this failure is non-credible commitments. To facilitate commitments among agents and thereby improve cooperation, we define Markov Commitment Games (MCGs), a variant of commitment games in which agents can voluntarily commit to their proposed future plans. Based on MCGs, we propose a learnable commitment protocol trained via policy gradients. We further propose incentive-compatible learning to accelerate convergence to equilibria with higher social welfare. Experimental results on challenging mixed-motive tasks demonstrate faster empirical convergence and higher returns for our method compared with its counterparts. Our code is available at \url{https://github.com/shuhui-zhu/DCL}.
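To make the voluntary-commitment idea in the abstract concrete, here is a minimal toy sketch of a single negotiation round: agents propose plans and then individually decide whether to commit; only if every agent commits do the proposals become binding, otherwise all agents fall back to independent play. The function name, the two-agent setup, and the prisoner's-dilemma-style payoff table are all hypothetical illustrations, not the paper's actual MCG formulation or protocol.

```python
# Hypothetical payoff table: maps a joint action to (agent0_reward, agent1_reward).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

def negotiation_round(proposals, commit_decisions, fallback_actions):
    """One illustrative round: proposals bind only if all agents commit."""
    if all(commit_decisions):
        actions = proposals          # unanimous commitment: the joint plan binds
    else:
        actions = fallback_actions   # commitment failed: independent play
    return actions, PAYOFFS[actions]

# Both agents propose mutual cooperation and commit: the agreement binds.
actions, rewards = negotiation_round(
    proposals=("cooperate", "cooperate"),
    commit_decisions=(True, True),
    fallback_actions=("defect", "defect"),
)
print(actions, rewards)  # ('cooperate', 'cooperate') (3, 3)

# One agent declines to commit: the deal is off, and both defect.
actions, rewards = negotiation_round(
    proposals=("cooperate", "cooperate"),
    commit_decisions=(True, False),
    fallback_actions=("defect", "defect"),
)
print(actions, rewards)  # ('defect', 'defect') (1, 1)
```

The sketch only shows the commitment mechanism itself; in the paper's setting, the proposal and commitment decisions would be produced by policies learned with policy gradients rather than fixed by hand.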
Submission Number: 17