Balanced off-policy evaluation in general action spaces

Published: 03 Jun 2020, Last Modified: 02 Aug 2024 · International Conference on Artificial Intelligence and Statistics · CC BY 4.0
Abstract: Estimating importance sampling weights for off-policy evaluation of contextual bandits often results in imbalance: a mismatch between the desired and the actual distribution of state-action pairs after weighting. In this work, we present balanced off-policy evaluation (B-OPE), a generic method for estimating weights that minimize this imbalance. Estimating these weights reduces to a binary classification problem regardless of action type. We show that minimizing the risk of the classifier implies minimizing the imbalance with respect to the desired counterfactual distribution. This, in turn, is tied to the error of the off-policy estimate, allowing for easy hyperparameter tuning. We provide experimental evidence that B-OPE improves weighting-based approaches to offline policy evaluation in both discrete and continuous action spaces.
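To make the classification reduction described above concrete, here is a minimal sketch of how importance weights can be estimated with an off-the-shelf binary classifier. This is an illustration of the general classifier-based density-ratio idea, not the paper's implementation; the function name `estimate_bope_weights`, the logistic-regression choice, and the data layout are all assumptions for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def estimate_bope_weights(contexts, logged_actions, target_actions):
    """Illustrative weight estimation via binary classification.

    Label (context, action) pairs from the logging policy as class 0
    and pairs with actions drawn from the target policy as class 1;
    the classifier's odds on logged pairs then approximate the
    density ratio pi_target / pi_logging, i.e. the importance weight.
    Works identically for discrete (encoded) or continuous actions.
    """
    X0 = np.hstack([contexts, logged_actions])   # logging-policy pairs
    X1 = np.hstack([contexts, target_actions])   # target-policy pairs
    X = np.vstack([X0, X1])
    y = np.concatenate([np.zeros(len(X0)), np.ones(len(X1))])

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Odds p(class=1 | x, a) / p(class=0 | x, a) on the logged pairs
    # serve as the estimated importance weights.
    p = clf.predict_proba(X0)[:, 1]
    return p / (1.0 - p)

# Toy usage with synthetic continuous actions (purely hypothetical data):
rng = np.random.default_rng(0)
contexts = rng.normal(size=(500, 5))
logged_actions = rng.normal(size=(500, 1))     # behavior policy draws
target_actions = logged_actions + 0.5          # hypothetical target policy draws
weights = estimate_bope_weights(contexts, logged_actions, target_actions)
```

Because the reduction only requires a probabilistic classifier, its risk (e.g., validation log-loss) can be used to tune hyperparameters, which is the practical convenience the abstract highlights.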