Abstract: We study off-policy evaluation in the setting of contextual bandits, where we aim to evaluate a new policy using historical data consisting of contexts, actions, and received rewards. This historical data typically does not accurately reflect the action distribution of the new policy. A common approach, inverse probability weighting (IPW), adjusts for these discrepancies between action distributions.
However, this method often suffers from high variance because the behavior policy's action probabilities appear in the denominator: small probabilities produce large weights.
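For concreteness, the standard IPW estimator has the following form (the notation here is assumed, as the abstract does not fix one): given logged tuples $(x_i, a_i, r_i)$ with actions drawn from a behavior policy $\mu$, the value of a target policy $\pi$ is estimated as

$$\hat{V}_{\mathrm{IPW}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)}\, r_i,$$

so a small propensity $\mu(a_i \mid x_i)$ yields a large weight and inflates the variance of the estimate.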
The doubly robust (DR) estimator reduces variance by additionally modeling the reward, but it does not directly address the variance introduced by the inverse-probability weights.
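Similarly, the standard DR estimator augments IPW with a reward model $\hat{r}(x, a)$ (again, notation assumed):

$$\hat{V}_{\mathrm{DR}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \Big[ \sum_{a} \pi(a \mid x_i)\, \hat{r}(x_i, a) + \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)} \big( r_i - \hat{r}(x_i, a_i) \big) \Big].$$

An accurate $\hat{r}$ shrinks the weighted residual term, but the inverse-probability weights themselves remain.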
In this work, we address this limitation of IPW by proposing a Nonparametric Weighting (NW) approach that constructs the weights using a nonparametric model. NW achieves low bias like IPW while typically exhibiting significantly lower variance.
To further reduce variance, we incorporate reward predictions, as in the DR technique, resulting in the Model-assisted Nonparametric Weighting (MNW) approach. We show that MNW yields accurate value estimates when either the reward model or the behavior policy model is well specified. Extensive empirical comparisons show that our approaches consistently outperform existing techniques, achieving lower variance in value estimation while maintaining low bias.
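To make the two baselines above concrete (IPW and DR only; the proposed NW/MNW estimators are not reproduced here), a minimal synthetic-bandit sketch is given below. The policies, reward model, and data-generating process are all illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_actions = 5000, 3

# Synthetic contexts and a simple linear reward structure
# (illustrative assumptions only).
X = rng.normal(size=(n, 2))
theta = rng.normal(size=(n_actions, 2))

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Behavior policy mu (near-uniform) and target policy pi (more greedy).
mu = softmax(0.2 * X @ theta.T)
pi = softmax(2.0 * X @ theta.T)

# Log data under mu; rewards are the expected reward plus noise.
actions = np.array([rng.choice(n_actions, p=mu[i]) for i in range(n)])
mean_r = X @ theta.T                      # E[r | x, a] for every action
rewards = mean_r[np.arange(n), actions] + rng.normal(scale=0.5, size=n)

true_value = (pi * mean_r).sum(axis=1).mean()

# IPW: unbiased, but the propensity in the denominator inflates variance.
w = pi[np.arange(n), actions] / mu[np.arange(n), actions]
v_ipw = np.mean(w * rewards)

# DR: add a reward model; here the true mean stands in for a fitted,
# well-specified model.
r_hat = mean_r
baseline = (pi * r_hat).sum(axis=1)
v_dr = np.mean(baseline + w * (rewards - r_hat[np.arange(n), actions]))

print(f"true value {true_value:.3f}  IPW {v_ipw:.3f}  DR {v_dr:.3f}")
```

Repeating the simulation across seeds shows the usual pattern the abstract refers to: both estimators are nearly unbiased, while DR's estimates spread far less than IPW's when the reward model is accurate.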
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Inigo_Urteaga1
Submission Number: 6353