Abstract: In multiagent reinforcement learning scenarios, independent agents must often jointly learn to perform a cooperative task. This paper focuses on such a scenario in which agents have individual preferences regarding how to accomplish the shared task. We consider a framework for this setting that balances individual preferences against task rewards using a linear mixing scheme. In our theoretical analysis, we establish that agents can reach an equilibrium that yields optimal shared task reward even when they consider individual preferences that are not fully aligned with this task. We then show empirically, somewhat counterintuitively, that there exist mixing schemes that outperform a purely task-oriented baseline. We further investigate empirically how to optimize the mixing scheme.
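For illustration only, a minimal sketch of one plausible form of such a linear mixing scheme, under the assumption that each agent $i$ blends the shared task reward with its own preference reward via a per-agent mixing weight; the symbols $\tilde{r}_i$, $r^{\text{task}}$, $r^{\text{pref}}_i$, and $\alpha_i$ are illustrative notation, not taken from the paper:

% Hypothetical linear mixing of rewards for agent i (notation assumed, not the paper's):
\[
  \tilde{r}_i(s, a) \;=\; (1 - \alpha_i)\, r^{\text{task}}(s, a) \;+\; \alpha_i\, r^{\text{pref}}_i(s, a),
  \qquad \alpha_i \in [0, 1],
\]
% where alpha_i = 0 would recover the purely task-oriented baseline
% and alpha_i = 1 would ignore the shared task entirely.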