Keywords: Multi-armed bandits, Interactive proofs, Normal-form games
TL;DR: We show how to efficiently verify approximate optimality of smooth policies and strategies in bandits and games
Abstract: We study protocols for verifying approximate optimality of strategies in multi-armed bandits and normal-form games. As the number of actions available to each player is often large, we seek protocols where the number of queries to the utility oracle is sublinear in the number of actions. We prove that such verification is possible for sufficiently smooth strategies that do not put too much probability mass on any specific action, and we provide protocols for verifying that a smooth policy for a multi-armed bandit is close to optimal. Our verification protocols require provably fewer arm queries than learning. Furthermore, we show how to use cryptographic tools to reduce the communication cost of our protocols. We complement our protocols by proving a nearly tight lower bound on the query complexity of verification in our settings. As an application, we use our bandit verification protocol to build a protocol for verifying approximate optimality of a strong smooth Nash equilibrium, with sublinear query complexity.
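The full protocols appear in the paper; as a toy illustration of why smoothness enables query counts sublinear in the number of actions, the sketch below Monte Carlo-estimates a smooth policy's expected utility with a fixed budget of oracle queries, independent of the total number of arms. The utility function, arm count, and uniform policy here are hypothetical stand-ins, not taken from the paper.

```python
import random

def estimate_policy_value(policy_sample, utility_oracle, num_queries):
    """Monte Carlo estimate of a policy's expected utility.

    Makes exactly `num_queries` oracle calls, regardless of how many
    arms exist -- the key to sublinear verification of smooth policies.
    """
    total = 0.0
    for _ in range(num_queries):
        arm = policy_sample()          # draw an arm from the (smooth) policy
        total += utility_oracle(arm)   # one oracle query per draw
    return total / num_queries

# Hypothetical instance for illustration only:
n_arms = 100_000
rng = random.Random(0)
utility = lambda a: a / n_arms                 # utility of arm a, in [0, 1)
smooth_policy = lambda: rng.randrange(n_arms)  # uniform = maximally smooth

est = estimate_policy_value(smooth_policy, utility, num_queries=2_000)
# the estimate concentrates around the true expected utility (~0.5 here)
```

Because a smooth policy spreads its mass over many arms, a small sample of draws already pins down its value; a policy concentrated on one unsampled arm would defeat this estimator, which is the intuition behind the smoothness requirement.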
Supplementary Material:  zip
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 9270