Keywords: Online Learning; Stackelberg Games; Algorithmic Game Theory
TL;DR: We develop online learning algorithms for multi-follower Bayesian Stackelberg games with unknown type distributions under multiple feedback models.
Abstract: In a multi-follower Bayesian Stackelberg game, a leader plays a mixed strategy over $L$ actions, to which $n \ge 1$ followers, each having one of $K$ possible private types, best respond by choosing one of $A$ actions. The leader's optimal strategy depends on the distribution of the followers' private types.
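As a point of reference, the sketch below illustrates how a single follower of a given type best responds to the leader's mixed strategy; the utility matrix `U_k` and the function name are illustrative assumptions, not objects from the paper.

```python
import numpy as np

# Illustrative sketch (not from the paper): a type-k follower's best response
# to the leader's mixed strategy. `U_k` is a hypothetical L x A utility matrix
# for type k; `x` is the leader's distribution over its L actions.

def best_response(x: np.ndarray, U_k: np.ndarray) -> int:
    """Action in {0, ..., A-1} maximizing the follower's expected utility."""
    expected = x @ U_k          # length-A vector: E_{l ~ x}[U_k[l, a]]
    return int(np.argmax(expected))

# Example: L = 3 leader actions, A = 2 follower actions, random utilities.
rng = np.random.default_rng(0)
x = np.array([0.5, 0.3, 0.2])
U_k = rng.uniform(size=(3, 2))
print(best_response(x, U_k))
```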
We study an online learning problem for Bayesian Stackelberg games, in which a leader interacts for $T$ rounds with $n$ followers whose types are sampled from an unknown distribution in every round. The leader's goal is to minimize regret, defined as the difference between the cumulative utility of the optimal strategy and that of the strategies actually chosen. We design learning algorithms for the leader under different feedback settings. Under type feedback, where the leader observes the followers' types after each round, we design algorithms that achieve $\mathcal O\big(\sqrt{\min\{L\log(nKA T), ~ nK \} \cdot T} \big)$ regret for independent type distributions and $\mathcal O\big(\sqrt{\min\{L\log(nKA T), ~ K^n \} \cdot T} \big)$ regret for general type distributions. Interestingly, neither bound grows polynomially with $n$. Under action feedback, where the leader observes only the followers' actions, we design algorithms with $\mathcal O( \min\{\sqrt{ n^L K^L A^{2L} L T \log T}, ~ K^n\sqrt{ T } \log T \} )$ regret. We also prove a lower bound of $\Omega(\sqrt{\min\{L, ~ nK\}T})$, almost matching the type-feedback upper bounds.
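To make the interaction protocol concrete, here is a minimal sketch of the type-feedback setting with a single follower ($n = 1$). Everything named here — the plug-in update, the oracles `best_strategy_for` and `leader_utility` — is a hypothetical stand-in, not the paper's algorithm, and regret is measured against the best fixed strategy for the realized type sequence as a hindsight proxy for the optimal strategy.

```python
import numpy as np

# Hedged sketch, not the paper's method: type-feedback protocol for n = 1,
# with a plug-in leader that best-responds to the empirical type distribution.
# `best_strategy_for` and `leader_utility` are hypothetical oracles; the
# paper's algorithms, not this loop, achieve the stated regret bounds.

def run_type_feedback(T, K, rng, true_dist, best_strategy_for, leader_utility):
    counts = np.ones(K)                      # smoothed empirical type counts
    realized, types = 0.0, []
    for _ in range(T):
        x_t = best_strategy_for(counts / counts.sum())  # plug-in strategy
        k_t = rng.choice(K, p=true_dist)     # type revealed after the round
        realized += leader_utility(x_t, k_t)
        counts[k_t] += 1
        types.append(k_t)
    empirical = np.bincount(types, minlength=K) / T
    x_star = best_strategy_for(empirical)    # best fixed strategy in hindsight
    optimal = sum(leader_utility(x_star, k) for k in types)
    return optimal - realized                # cumulative regret
```

The plug-in design reflects the usual intuition for type feedback: the empirical type distribution concentrates around the true one at roughly a $1/\sqrt{t}$ rate, which is the kind of effect that drives $\sqrt{T}$-style regret bounds.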
Primary Area: learning theory
Submission Number: 15197