A Coalitional Markov Decision Process Model for Dynamic Coalition Formation among Agents

Published: 01 Jan 2020, Last Modified: 12 Jun 2024 · WI/IAT 2020 · CC BY-SA 4.0
Abstract: In the multi-agent field, most studies of coalition formation assume a static environment, but in real-world scenarios coalition formation can occur in dynamic environments. This gives rise to the dynamic coalition formation problem, in which the coalition structure may change over time. In response, we propose the coalitional Markov decision process (CMDP). In a CMDP, the dynamic process is modeled as an MDP in which the agents observe the current state and form coalitions; each coalition acts as a unit whose actions affect the environment, which then probabilistically transitions to the next state, and the process repeats. However, changing the coalition structure in the midst of MDP transitions incurs a cost, which prevents the classical algorithms for solving MDPs (e.g., Q-learning) from being applied directly to CMDPs. We therefore propose a novel algorithm, coalitional Q-learning, to solve CMDPs, and prove that it guarantees convergence to optimal policies. Furthermore, we apply the proposed algorithm to a dynamic coalition formation problem in edge computing, guiding edge servers to cooperatively perform tasks, and thereby verify the algorithm's effectiveness.
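The core idea in the abstract, that a Q-learning update can account for a cost whenever the coalition structure changes between transitions, can be sketched in a toy setting. The following is a minimal illustration, not the paper's algorithm: the environment, the two coalition structures, the switching cost, and all dynamics here are hypothetical assumptions made for the example.

```python
import random
from collections import defaultdict

# Hypothetical toy CMDP: 2 states, 2 coalition structures, 2 joint actions.
# All names, dynamics, and costs are illustrative assumptions, not the paper's.
STATES = [0, 1]
STRUCTURES = ["grand", "singletons"]   # candidate coalition structures
ACTIONS = [0, 1]                       # joint action taken by the coalitions
SWITCH_COST = 0.5                      # cost incurred when the structure changes

def step(state, action):
    """Toy stochastic transition: action 1 tends to reach the rewarding state."""
    next_state = 1 if random.random() < (0.8 if action == 1 else 0.2) else 0
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

def coalitional_q_learning(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1):
    # Q is indexed by (state, previous structure, chosen structure, action),
    # so the value of a choice can reflect the reorganization cost of
    # switching away from the structure the agents currently hold.
    Q = defaultdict(float)
    for _ in range(episodes):
        state, prev_cs = 0, "grand"
        for _ in range(20):
            choices = [(cs, a) for cs in STRUCTURES for a in ACTIONS]
            if random.random() < eps:   # epsilon-greedy exploration
                cs, a = random.choice(choices)
            else:
                cs, a = max(choices, key=lambda c: Q[(state, prev_cs, *c)])
            next_state, reward = step(state, a)
            if cs != prev_cs:           # pay the reorganization cost
                reward -= SWITCH_COST
            best_next = max(Q[(next_state, cs, cs2, a2)]
                            for cs2 in STRUCTURES for a2 in ACTIONS)
            key = (state, prev_cs, cs, a)
            Q[key] += alpha * (reward + gamma * best_next - Q[key])
            state, prev_cs = next_state, cs
    return Q
```

Because the switching cost enters the reward before the standard Q-update, the learned greedy policy avoids gratuitous reorganizations while still selecting the joint action that reaches the rewarding state; the paper's contribution is proving that such a scheme still converges to optimal CMDP policies.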