Off-Policy Evaluation for Large Action Spaces via Policy Convolution

Published: 23 Jan 2024 · Last Modified: 23 May 2024 · TheWebConf 2024
Keywords: Off-policy Evaluation, Counterfactual Estimation, Contextual Bandits
TL;DR: We propose a novel estimator that strategically convolves the logging and evaluation policies using action embeddings for better off-policy evaluation.
Abstract: Developing accurate off-policy estimators is crucial for both evaluating and optimizing new policies. The main challenge in off-policy estimation is the distribution shift between the logging policy, which generates the data, and the target policy that we aim to evaluate. Typically, techniques for correcting this distribution shift involve some form of importance sampling. This approach yields unbiased value estimates but often comes at the cost of high variance, even in the simpler case of one-step contextual bandits. Furthermore, importance sampling relies on the common support assumption, which becomes impractical when the action space is large. To address these challenges, we introduce the Policy Convolution (PC) estimator. This method leverages latent structure within actions, made available through action embeddings, to strategically convolve the logging and target policies. This convolution introduces a unique bias-variance trade-off, which can be controlled by adjusting the amount of convolution. Our experiments on synthetic and real-world benchmark datasets demonstrate substantial mean squared error (MSE) improvements when using PC, especially when either the action space or the policy mismatch becomes large, with gains of up to 5-6 orders of magnitude over existing estimators.
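For intuition, the sketch below shows one way a policy-convolution-style importance-sampling estimator could look. It is a minimal illustration rather than the paper's exact estimator: the Gaussian kernel over action embeddings, the bandwidth parameter `tau`, and the choice to smooth both the logging and target policies via a single kernel matrix are assumptions made here for concreteness. As `tau` approaches zero the kernel approaches the identity and vanilla importance sampling is recovered; larger `tau` smooths more, trading bias for variance.

```python
import numpy as np

def gaussian_kernel(embeddings, tau):
    """Row-normalized Gaussian kernel over action embeddings (|A| x |A|).

    tau controls the amount of convolution (an assumed hyperparameter):
    tau -> 0 recovers the original policies, larger tau smooths more.
    """
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * tau ** 2))
    return K / K.sum(axis=1, keepdims=True)

def convolve_policy(pi, K):
    """Smooth per-context action probabilities pi (n x |A|) with kernel K."""
    return pi @ K  # rows remain valid probability distributions

def pc_style_estimate(rewards, actions, pi_logging, pi_target, embeddings, tau):
    """Importance-sampling value estimate with both policies convolved
    over the action-embedding space (illustrative sketch).

    rewards:    (n,)      observed rewards r_i
    actions:    (n,)      logged action indices a_i
    pi_logging: (n, |A|)  logging-policy probabilities per context
    pi_target:  (n, |A|)  target-policy probabilities per context
    embeddings: (|A|, d)  action embeddings
    """
    K = gaussian_kernel(embeddings, tau)
    pi0_conv = convolve_policy(pi_logging, K)
    pi1_conv = convolve_policy(pi_target, K)
    idx = np.arange(len(actions))
    weights = pi1_conv[idx, actions] / pi0_conv[idx, actions]
    return float(np.mean(weights * rewards))
```

Because both policies are smoothed with the same kernel, actions that lack support under the logging policy but are close in embedding space to well-supported actions still receive non-zero smoothed propensities, which is the mechanism by which this kind of estimator can sidestep the common support assumption at the cost of some bias.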
Track: User Modeling and Recommendation
Submission Guidelines Scope: Yes
Submission Guidelines Blind: Yes
Submission Guidelines Format: Yes
Submission Guidelines Limit: Yes
Submission Guidelines Authorship: Yes
Student Author: Yes
Submission Number: 1159