Keywords: Learning an equilibrium; Social games; Bandit feedback
TL;DR: We investigate the problem of learning an equilibrium from bandit feedback in a generalized two-sided matching market, where agents can adaptively choose their actions based on their assigned matches.
Abstract: We investigate the problem of learning an equilibrium in a generalized two-sided matching market, where agents can adaptively choose their actions based on their assigned matches. Specifically, we consider a setting in which matched agents engage in a zero-sum game with initially unknown payoff matrices, and we explore whether a centralized procedure can learn an equilibrium from bandit feedback. We adopt the solution concept of **matching equilibrium**, where a pair consisting of a matching $m$ and a set of agent strategies $X$ forms an equilibrium if no agent has an incentive to deviate from $(m, X)$. To measure the deviation of a given pair $(m, X)$ from the equilibrium pair $(m^\star, X^\star)$, we introduce **matching instability**, which serves as a regret measure for the corresponding learning problem. We then propose a UCB algorithm in which agents form preferences and select actions based on optimistic estimates of the game payoffs, and prove that it achieves sublinear, instance-independent regret over a time horizon $T$.
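To make the abstract's setup concrete, below is a minimal Python sketch of a UCB-style procedure for this kind of problem. Everything structural here is an illustrative assumption rather than the paper's algorithm: the game sizes, the confidence bonus, the use of a max-weight assignment on optimistic game values as the matching rule (the paper instead forms a matching from agents' preferences), and the maximin strategies played.

```python
"""Hypothetical sketch: UCB-style learning of a matching plus strategies
in a market where matched pairs play unknown zero-sum games.
All design choices are illustrative assumptions, not the paper's method."""
import numpy as np
from scipy.optimize import linprog, linear_sum_assignment

rng = np.random.default_rng(0)
N, K, T = 3, 2, 2000                    # agents per side, actions, horizon
A = rng.uniform(-1, 1, (N, N, K, K))    # hidden zero-sum payoff matrices

sums = np.zeros((N, N, K, K))           # running sums of observed payoffs
cnts = np.ones((N, N, K, K))            # visit counts (init 1 avoids div by 0)

def game_value(M):
    """Value and maximin strategy of the zero-sum game with row payoffs M,
    via the standard LP: maximize v subject to M^T x >= v, x a distribution."""
    k = M.shape[0]
    c = np.zeros(k + 1); c[-1] = -1.0                 # minimize -v
    A_ub = np.hstack([-M.T, np.ones((M.shape[1], 1))])  # v - M^T x <= 0
    A_eq = np.ones((1, k + 1)); A_eq[0, -1] = 0.0       # sum(x) = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(M.shape[1]),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * k + [(None, None)])
    x = np.clip(res.x[:k], 0, None)                   # clean numerical noise
    return res.x[-1], x / x.sum()

for t in range(1, T + 1):
    bonus = np.sqrt(2 * np.log(t + 1) / cnts)
    ucb, lcb = sums / cnts + bonus, sums / cnts - bonus
    # optimistic value of each potential match from the row side's viewpoint
    v_hi = np.array([[game_value(ucb[i, j])[0] for j in range(N)]
                     for i in range(N)])
    # illustrative matching rule: maximize total optimistic value
    rows, cols = linear_sum_assignment(-v_hi)
    for i, j in zip(rows, cols):
        _, x = game_value(ucb[i, j])       # row player: optimistic maximin
        _, y = game_value(-lcb[i, j].T)    # column player: optimism on -A
        a, b = rng.choice(K, p=x), rng.choice(K, p=y)
        r = A[i, j, a, b] + rng.normal(0, 0.1)   # noisy bandit feedback
        sums[i, j, a, b] += r
        cnts[i, j, a, b] += 1
```

The sketch only illustrates the interaction loop implied by the abstract: optimistic payoff estimates drive both the matching and the strategies, and bandit feedback on the played action pair refines the estimates.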
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~andreas_athanasopoulos1, ~Christos_Dimitrakakis1
Track: Regular Track: unpublished work
Submission Number: 111