Calibrated Stackelberg Games: Learning Optimal Commitments Against Calibrated Agents

Published: 21 Sept 2023 · Last Modified: 15 Jan 2024 · NeurIPS 2023 spotlight
Keywords: calibration, Stackelberg games, learning in repeated games, strategic agents, best response, strategic classification, Stackelberg Security Games
TL;DR: We introduce and study learning in a new class of games called "Calibrated Stackelberg Games" (CSGs) where the agents can only form calibrated forecasts about the principal's strategies (as opposed to having full knowledge of them).
Abstract: In this paper, we introduce a generalization of the standard Stackelberg Games (SGs) framework: _Calibrated Stackelberg Games_ (CSGs). In CSGs, a principal repeatedly interacts with an agent who (contrary to standard SGs) does not have direct access to the principal's action but instead best responds to _calibrated forecasts_ about it. CSGs are a powerful modeling tool: rather than assuming that agents run ad hoc, highly specified algorithms to infer the principal's actions, they capture any calibrated inference procedure, and thus more robustly address the real-life applications that SGs were originally intended to model. Along with CSGs, we also introduce a stronger notion of calibration, termed _adaptive calibration_, that provides fine-grained any-time calibration guarantees against adversarial sequences. We give a general approach for obtaining adaptive calibration algorithms and specialize them to finite CSGs. In our main technical result, we show that in CSGs the principal can achieve utility that converges to the optimal Stackelberg value of the game, in both _finite_ and _continuous_ settings, and that no higher utility is achievable. Two prominent and immediate applications of our results are learning in Stackelberg Security Games and strategic classification, both against _calibrated_ agents.
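To make the interaction model concrete, here is a minimal toy sketch (not from the paper) of one CSG-style repeated game: the principal commits to a fixed mixed strategy, and the agent best responds to a forecast of it. The 2x2 payoff matrices are hypothetical, and the forecaster shown is a simple smoothed empirical-frequency estimate, which is calibrated only against i.i.d. play, not the adversarial-sequence guarantee (adaptive calibration) that the paper develops.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2x2 payoffs, for illustration only.
# payoff[i, j] = utility when the principal plays i and the agent plays j.
principal_payoff = np.array([[1.0, 0.0],
                             [0.5, 0.8]])
agent_payoff = np.array([[0.2, 0.9],
                         [0.7, 0.1]])

T = 5000
commitment = np.array([0.3, 0.7])  # principal's committed mixed strategy
counts = np.ones(2)                # agent's smoothed frequency estimate
total = 0.0

for t in range(T):
    # Agent forms a forecast of the principal's mixed action and best responds
    # to it, without ever observing the commitment directly.
    forecast = counts / counts.sum()
    agent_action = int(np.argmax(forecast @ agent_payoff))
    principal_action = rng.choice(2, p=commitment)
    total += principal_payoff[principal_action, agent_action]
    counts[principal_action] += 1

print(total / T)  # principal's average utility over T rounds
```

As the forecast concentrates on the true commitment, the agent's best response stabilizes and the principal's average utility approaches the value of committing against a fully informed agent; the paper's results establish convergence to the optimal Stackelberg value against any calibrated agent.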
Supplementary Material: pdf
Submission Number: 8000