Keywords: Stackelberg Games, Equilibrium Computation, Learning in Games, Robust Optimization, Market Equilibrium
TL;DR: We investigate no-regret learning dynamics in min-max Stackelberg games.
Abstract: The behavior of no-regret learning algorithms is well understood in two-player min-max (i.e., zero-sum) games.
In this paper, we investigate the behavior of no-regret learning in min-max games with dependent strategy sets, where the strategy of the first player constrains the behavior of the second. Such games are best understood as sequential, i.e., min-max Stackelberg, games. We consider two settings, one in which only the first player chooses their actions using a no-regret algorithm while the second player best responds, and one in which both players use no-regret algorithms. For the former case, we show that no-regret dynamics converge to a Stackelberg equilibrium. For the latter case, we introduce a new type of regret, which we call Lagrangian regret, and show that if both players minimize their Lagrangian regrets, then play converges to a Stackelberg equilibrium. We then observe that online mirror descent (OMD) dynamics in these two settings correspond respectively to a known nested (i.e., sequential) gradient descent-ascent (GDA) algorithm and a new simultaneous GDA-like algorithm, thereby establishing convergence of these algorithms to Stackelberg equilibrium. Finally, we analyze the robustness of OMD dynamics to perturbations by investigating dynamic min-max Stackelberg games. We prove that OMD dynamics are robust for a large class of dynamic min-max games with independent strategy sets. In the dependent case, we demonstrate the robustness of OMD dynamics experimentally by simulating them in dynamic Fisher markets, a canonical example of a min-max Stackelberg game with dependent strategy sets.
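Since the notion of Lagrangian regret is central to the second setting, here is one plausible formalization, hedged as a sketch: it assumes the game $\min_{x \in X} \max_{y : g(x,y) \geq 0} f(x,y)$ with Lagrangian $\mathcal{L}(x, y, \lambda) = f(x,y) + \lambda^\top g(x,y)$, and the exact definition in the paper may differ in details.

```latex
% Hedged sketch of Lagrangian regret (not verbatim from the paper).
% The outer player's regret is measured against the Lagrangian rather
% than the raw objective, with (y_t, lambda_t) the inner player's and
% multiplier's iterates at round t:
\[
  \mathrm{Reg}^{\mathcal{L}}_x(T)
    = \sum_{t=1}^{T} \mathcal{L}(x_t, y_t, \lambda_t)
    - \min_{x \in X} \sum_{t=1}^{T} \mathcal{L}(x, y_t, \lambda_t),
\]
% and symmetrically for the inner (maximizing) player,
\[
  \mathrm{Reg}^{\mathcal{L}}_y(T)
    = \max_{y \in Y} \sum_{t=1}^{T} \mathcal{L}(x_t, y, \lambda_t)
    - \sum_{t=1}^{T} \mathcal{L}(x_t, y_t, \lambda_t).
\]
```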
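To make the nested (sequential) GDA dynamic concrete, below is a minimal runnable sketch, not the authors' exact algorithm: the objective `f`, coupling constraint `g`, step sizes, and iteration counts are all illustrative assumptions. The inner player approximately best-responds by running ascent-descent on the Lagrangian, and the outer player takes an envelope-theorem-style gradient step.

```python
import numpy as np

# Illustrative sketch of nested gradient descent-ascent for a
# min-max Stackelberg game
#   min_x max_{y : g(x, y) >= 0} f(x, y)
# via the Lagrangian L(x, y, lam) = f(x, y) + lam * g(x, y).
# f, g, step sizes, and iteration counts are hypothetical choices.

def f(x, y):   # example objective: min_x max_y x^2 - y^2 + x*y
    return x**2 - y**2 + x * y

def g(x, y):   # example coupling constraint: x + y <= 1, i.e. g >= 0
    return 1.0 - x - y

def grad(fn, x, y, eps=1e-6):
    # Central finite differences; analytic gradients work equally well.
    gx = (fn(x + eps, y) - fn(x - eps, y)) / (2 * eps)
    gy = (fn(x, y + eps) - fn(x, y - eps)) / (2 * eps)
    return gx, gy

def inner_best_response(x, y0=0.0, lam0=1.0, eta=0.05, steps=500):
    """Approximate max_{y : g(x,y) >= 0} f(x,y) by Lagrangian ascent-descent."""
    y, lam = y0, lam0
    for _ in range(steps):
        _, fy = grad(f, x, y)
        _, gy = grad(g, x, y)
        y += eta * (fy + lam * gy)            # ascend the Lagrangian in y
        lam = max(0.0, lam - eta * g(x, y))   # descend in the multiplier
    return y, lam

def nested_gda(x0=0.5, eta=0.05, steps=200):
    x = x0
    for _ in range(steps):
        y, lam = inner_best_response(x)       # inner player best responds
        fx, _ = grad(f, x, y)
        gx, _ = grad(g, x, y)
        x -= eta * (fx + lam * gx)            # envelope-style gradient step in x
    return x, y

if __name__ == "__main__":
    x_star, y_star = nested_gda()
    print(f"approx. Stackelberg equilibrium: x={x_star:.3f}, y={y_star:.3f}")
```

In this toy instance the constrained inner optimum is interior, so the multiplier decays to zero and the dynamics converge to the Stackelberg equilibrium near (0, 0); the simultaneous GDA-like variant from the paper would instead update both players (and the multiplier) in each round rather than nesting the inner loop.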
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/robust-no-regret-learning-in-min-max/code) (via CatalyzeX)