Teamwork Reinforcement Learning with Concave Utilities

Published: 25 Apr 2022, Last Modified: 05 May 2023
ICLR 2022 Workshop on Gamification and Multiagent Solutions
Keywords: reinforcement learning, duality, multiagent, coordination, reward sharing, planning, learning
TL;DR: Given a partition of the state space, we develop a multiagent reward-sharing approach (via convex duality) that maximizes a concave team utility.
Abstract: Complex reinforcement learning (RL) tasks often require a divide-and-conquer approach, in which a large task is split into pieces and solved by individual agents. In this paper, we study a teamwork RL setting where individual agents make decisions on disjoint subsets (blocks) of the state space and have private interests (reward functions), while the entire team aims to maximize a general long-term team utility function and may be subject to constraints. This team utility, which is not necessarily a cumulative sum of rewards, is modeled as a nonlinear function of the team's joint state-action occupancy distribution. By leveraging the inherent duality of policy optimization, we propose a min-max multi-block policy optimization framework that decomposes the overall problem into individual local tasks. This enables a federated teamwork mechanism in which a team lead coordinates individual agents via reward shaping, and each agent solves a local task defined only on its own state block. We analyze the convergence of this teamwork policy optimization mechanism and establish an $O(1/T)$ convergence rate to the team's joint optimum. This mechanism allows team members to jointly find the socially optimal policy for the whole team while keeping their local information private.
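
The abstract describes the mechanism only at a high level. The following Python sketch shows one plausible instantiation under stated assumptions, not the paper's exact algorithm: the team lead linearizes a concave utility of the joint occupancy measure to produce a shaped reward (here $F(\mu) = \sum_{s,a} \log(\mu_{s,a} + \epsilon)$, an assumed example of a concave utility), and each agent performs a mirror-ascent policy update restricted to its own state block. The toy MDP, the utility, and all names (`occupancy`, `grad_F`, `blocks`) are illustrative assumptions.

```python
# Hedged sketch of a reward-sharing primal-dual loop for concave-utility team RL.
# Illustrative assumption of how the abstract's mechanism could look, NOT the
# authors' exact algorithm: utility, step sizes, and the MDP are all toy choices.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 6, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))           # P[s, a] = distribution over next states
blocks = [np.array([0, 1, 2]), np.array([3, 4, 5])]  # disjoint state blocks, one per agent

def occupancy(policy):
    """Discounted state-action occupancy mu(s, a) of a stationary tabular policy."""
    mu0 = np.full(S, 1.0 / S)                  # uniform initial state distribution
    P_pi = np.einsum('sa,sat->st', policy, P)  # state-to-state kernel under the policy
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu0)
    return d[:, None] * policy

def grad_F(mu):
    """Gradient of the assumed concave team utility F(mu) = sum log(mu + eps)."""
    return 1.0 / (mu + 1e-2)

def q_values(policy, r, iters=200):
    """Policy evaluation under the shaped reward r: Q = r + gamma * P V."""
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = (policy * Q).sum(axis=1)
        Q = r + gamma * np.einsum('sat,t->sa', P, V)
    return Q

policy = np.full((S, A), 1.0 / A)
for t in range(100):
    mu = occupancy(policy)
    r = grad_F(mu)          # team lead's shaped reward (dual / linearization step)
    Q = q_values(policy, r)
    for blk in blocks:      # each agent updates only its own block (mirror ascent)
        logits = np.log(policy[blk]) + 0.01 * Q[blk]
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        policy[blk] = np.exp(logits)
        policy[blk] /= policy[blk].sum(axis=1, keepdims=True)

print("final utility:", np.log(occupancy(policy) + 1e-2).sum())
```

For simplicity this sketch evaluates Q-values on the full MDP while restricting each agent's policy update to its own block; the paper's federated mechanism instead keeps each agent's computation local to its block, with the team lead's shaped rewards carrying all cross-block coordination.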