Keywords: Generative Agents, Commitment Devices, Cooperation, Game Theory, Multi-Agent Systems
Abstract: Large Language Model (LLM) based agents have demonstrated potential in various applications, effectively serving as proxies for human interaction in numerous tasks. Previously, the exploration of agent cooperation has been primarily confined to Multi-agent Reinforcement Learning (MARL), where commitment devices (CDs) have been shown to significantly improve collaborative outcomes [8]. This paper examines the effectiveness of CDs in fostering cooperative behavior among LLM agents within game-theoretic contexts. We investigate the ability of LLM agents to utilize CDs to achieve socially optimal outcomes while balancing their individual interests. Our experiments cover a range of game structures, including the classic Prisoner's Dilemma, Public Goods games, and the more complex dynamic Harvest game. We introduce a framework that enables agents to use CDs in these games to reach socially optimal outcomes. Our preliminary experiments show that in simpler game scenarios, agents successfully use CDs to reach socially optimal outcomes (the new Nash Equilibrium of the game with CDs). In more complex dynamic games, however, agents exhibit limitations in strategically applying CDs, resulting in more modest performance improvements. These findings suggest that while commitment devices can enhance cooperation among generative agents, further improvements at the foundation-model level are necessary for optimal results in complex, realistic game scenarios.
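To make the "new Nash Equilibrium of the game with CDs" concrete, here is a minimal sketch (not from the paper) of a Prisoner's Dilemma augmented with a conditional commitment device. The payoff values, the SIGN strategy name, and the rule that a unilateral signer falls back to defection are illustrative assumptions; the sketch simply brute-forces the pure-strategy equilibria of the augmented game.

```python
# Minimal sketch (assumptions, not the paper's setup): a Prisoner's Dilemma
# augmented with a conditional commitment device. "SIGN" means "I commit to
# cooperate if and only if my opponent also signs"; otherwise players choose
# C (cooperate) or D (defect) directly. Payoffs use textbook values.
from itertools import product

T, R, P, S = 5, 3, 1, 0            # temptation, reward, punishment, sucker
PD = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
      ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def resolve(a, b):
    """Map augmented strategies to realized PD actions."""
    if a == 'SIGN' and b == 'SIGN':
        return 'C', 'C'            # both commitments bind: mutual cooperation
    # A unilateral signer's conditional commitment is not triggered,
    # so we assume the signer falls back to defection.
    return ('D' if a == 'SIGN' else a), ('D' if b == 'SIGN' else b)

def payoff(a, b):
    return PD[resolve(a, b)]

STRATS = ['SIGN', 'C', 'D']

def nash_equilibria():
    """Brute-force the pure-strategy Nash equilibria of the augmented game."""
    eqs = []
    for a, b in product(STRATS, repeat=2):
        ua, ub = payoff(a, b)
        if (all(payoff(a2, b)[0] <= ua for a2 in STRATS) and
                all(payoff(a, b2)[1] <= ub for b2 in STRATS)):
            eqs.append(((a, b), (ua, ub)))
    return eqs

print(nash_equilibria())
# [(('SIGN', 'SIGN'), (3, 3)), (('D', 'D'), (1, 1))] -- the commitment device
# adds a socially optimal equilibrium alongside the original defect-defect one.
```

In this toy version, signing is a best response to signing (any deviation voids the opponent's commitment and yields at most the punishment payoff), which is the sense in which the CD creates a new, socially optimal equilibrium.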
Submission Number: 22