Keywords: coalition formation games, Bi-level reinforcement learning, multi-agent reinforcement learning
Abstract: The challenge of coalition formation games lies in efficiently navigating the exponentially large space of possible coalitions to identify the optimal partition. Existing approaches to coalition formation games either provide optimal solutions with limited scalability or approximate solutions without quality guarantees; we propose a novel scalable and sample-efficient approximation method based on deep reinforcement learning. Specifically, we model the coalition formation problem as a finite Markov decision process and use deep neural networks to approximate optimal coalition structures within the full and abstracted coalition spaces. Moreover, our method is applicable to bi-level optimization problems in which coalition values are determined by the policies of individual agents at a lower decision-making level. This way, our approach facilitates dynamic, adaptive adjustments to coalition value assessments as they evolve over time. Empirical results demonstrate our algorithm's effectiveness in approximating optimal coalition structures in both normal-form and sequential mixed-motive games.
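To make the MDP framing concrete, here is a minimal illustrative sketch (not the authors' implementation): agents are placed into coalitions one at a time, a state is the partial partition built so far, an action assigns the next agent to an existing or new coalition, and the value of the final partition serves as the terminal reward. The characteristic function `coalition_value` below is a hypothetical toy example, and a simple greedy policy stands in for the learned deep policy.

```python
def coalition_value(coalition):
    # Hypothetical superadditive characteristic function (toy example).
    return len(coalition) ** 2

def structure_value(partition):
    # Terminal reward of the MDP: total value of the coalition structure.
    return sum(coalition_value(c) for c in partition)

def greedy_rollout(agents):
    """One rollout of the coalition-formation MDP under a one-step
    greedy policy; a trained deep network would replace the argmax."""
    partition = []
    for agent in agents:
        # Baseline action: start a new singleton coalition.
        best_gain, best_idx = coalition_value({agent}), None
        # Alternative actions: join an existing coalition.
        for i, c in enumerate(partition):
            gain = coalition_value(c | {agent}) - coalition_value(c)
            if gain > best_gain:
                best_gain, best_idx = gain, i
        if best_idx is None:
            partition.append({agent})
        else:
            partition[best_idx] = partition[best_idx] | {agent}
    return partition

p = greedy_rollout(range(4))
# With this superadditive toy value, all four agents merge into one
# coalition, yielding a structure value of 4**2 = 16.
```

Under a non-superadditive characteristic function the same rollout would instead produce a non-trivial partition, which is the setting where learned policies pay off.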
Primary Area: reinforcement learning
Submission Number: 13778