Keywords: Mixed-Motive Games, Networked Multi-Agent Systems, Social Network, Decentralized Training Decentralized Execution, Adaptation
Abstract: Reputation, the aggregation of peer assessments diffused through a social network, is a pivotal mechanism for promoting cooperation in the social dilemmas that pervade distributed multi-agent systems, whose agents have limited perception and cognitive capabilities.
Designing efficient reputation systems, which comprise reputation assessment rules and reputation-based policies, is a long-standing challenge.
Previous work either assumes predefined reputation assessment rules or models reputation as an intrinsic reward for policy learning, which limits these methods' ability to generalize and adapt.
To address this, we propose $\textbf{COOPER}$ ($\textbf{COOP}$eration with $\textbf{E}$mergent $\textbf{R}$eputation), a distributed multi-agent reinforcement learning method that jointly learns reputation assessment rules and reputation-based policies entirely from environment rewards.
Notably, leveraging the underlying mechanisms of reputation, we deliberately design the constituent modules of $\textbf{COOPER}$ and the data flows among them to overcome the latency and noise in the feedback signal caused by the deep entanglement between reputation and policy.
Experiments on the donation game and the coin game in grid-world environments demonstrate that $\textbf{COOPER}$ effectively adapts to various existing reputation systems and co-players.
Furthermore, we observe the co-emergence of reputation norms and cooperation in self-play settings.
These results hold robustly across diverse social network topologies, underscoring the generalizability and efficacy of our approach.
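A minimal sketch of the core idea described in the abstract, namely jointly learning a reputation-assessment rule and a reputation-conditioned policy from environment rewards alone. This is not the authors' implementation; the module names (AssessmentNet, PolicyNet), dimensions, and the REINFORCE-style objective are illustrative assumptions.

```python
# Sketch (assumed design, not COOPER's actual architecture): two learned modules,
# a reputation-assessment rule and a reputation-conditioned policy, trained only
# from environment rewards so that reputation norms and cooperation can co-emerge.
import torch
import torch.nn as nn


class AssessmentNet(nn.Module):
    """Maps an observed peer interaction to a reputation update in [-1, 1]."""

    def __init__(self, obs_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),
        )

    def forward(self, interaction: torch.Tensor) -> torch.Tensor:
        return self.net(interaction)


class PolicyNet(nn.Module):
    """Action distribution conditioned on local observation and peer reputations."""

    def __init__(self, obs_dim: int, n_peers: int, n_actions: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_peers, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor, peer_reputations: torch.Tensor):
        logits = self.net(torch.cat([obs, peer_reputations], dim=-1))
        return torch.distributions.Categorical(logits=logits)


def joint_reinforce_loss(log_probs: list, returns: list) -> torch.Tensor:
    """One reward-driven loss shared by both modules (REINFORCE-style),
    so assessment rules and reputation-based behavior are learned jointly."""
    return -(torch.stack(log_probs) * torch.stack(returns)).sum()
```

In this sketch, gradients from the single environment-reward objective flow into both modules, which is one plausible way to realize "jointly learns reputation assessment rules and reputation-based policies entirely from environment rewards"; the paper's actual module design and data flows are described in the full text.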
Primary Area: reinforcement learning
Submission Number: 4972