Keywords: Stackelberg games; Bandits; Online learning
TL;DR: We study an online learning problem in repeated general-sum Stackelberg games, where players act in a "decentralized" and "strategic" manner.
Abstract: We study an online learning problem in general-sum Stackelberg games, where players act in a decentralized and strategic manner. We consider two settings depending on the information available to the follower: (1) the $\textit{limited information}$ setting, where the follower observes only its own reward, and (2) the $\textit{side information}$ setting, where the follower additionally has side information about the leader's reward. We show that for the follower, myopically best responding to the leader's action is the best strategy in the limited information setting, but not necessarily in the side information setting: there, the follower can manipulate the leader's reward signals with strategic actions, and thereby induce the leader's strategy to converge to an equilibrium that is more favorable to the follower. Based on these insights, we study decentralized online learning for both players in the two settings. Our main contribution is to derive $\textit{last iterate}$ convergence and sample complexity results in both settings. Notably, we design a new manipulation strategy for the follower in the side information setting, and show that it has an intrinsic advantage over the best response strategy. Our theoretical results are further supported by empirical results.
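To make the setting concrete, below is a minimal toy sketch (not from the paper) of the limited information setting: the leader runs a generic EXP3-style bandit learner over its own actions, and the follower myopically best responds to each observed leader action. The 2x2 payoff matrices, step sizes, and the choice of EXP3 are illustrative assumptions, not the algorithms analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2x2 general-sum Stackelberg game; payoff matrices are made up for illustration.
# Rows index the leader's actions, columns index the follower's actions.
leader_reward = np.array([[0.9, 0.1],
                          [0.5, 0.6]])
follower_reward = np.array([[0.2, 0.8],
                            [0.7, 0.3]])

T = 5000
gamma = 0.05                 # exploration rate for the leader's bandit learner
eta = 0.1                    # learning rate
est = np.zeros(2)            # leader's importance-weighted cumulative reward estimates

for t in range(T):
    # Leader: generic EXP3-style no-regret learner over its own actions.
    w = np.exp(eta * est - np.max(eta * est))
    probs = (1 - gamma) * w / w.sum() + gamma / 2
    a = rng.choice(2, p=probs)

    # Follower (limited information): myopic best response to the observed leader action.
    b = int(np.argmax(follower_reward[a]))

    # Leader observes only its own realized reward (bandit feedback) and updates.
    est[a] += leader_reward[a, b] / probs[a]

print("leader action distribution after T rounds:", np.round(probs, 3))
```

In the side information setting, the paper's point is that a strategic follower need not best respond in the inner step above; it can instead play actions that distort the leader's reward feedback so that the leader's learning converges to an equilibrium the follower prefers.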
List Of Authors: Yaolong Yu and Haipeng Chen
Latex Source Code: zip
Signed License Agreement: pdf
Submission Number: 19