Abstract: We study off-policy evaluation in the setting of contextual bandits, where we aim to evaluate a new policy using historical data consisting of contexts, actions, and received rewards. This historical data typically does not faithfully represent the action distribution of the new policy. A common approach, inverse probability weighting (IPW), adjusts for these discrepancies in action distributions. However, this method often suffers from high variance because the logging probability appears in the denominator. The doubly robust (DR) estimator reduces variance by modeling the reward but does not directly address the variance arising from IPW. In this work, we address this limitation of IPW by proposing a Nonparametric Weighting (NW) approach that constructs weights using a nonparametric model. Our NW approach achieves low bias like IPW but typically exhibits significantly lower variance. To further reduce variance, we incorporate reward predictions -- similar to the DR technique -- resulting in the Model-assisted Nonparametric Weighting (MNW) approach. The MNW approach yields accurate value estimates by explicitly modeling and mitigating the bias from reward modeling, without aiming to guarantee the standard doubly robust property. Extensive empirical comparisons show that our approaches consistently outperform existing techniques, achieving lower variance in value estimation while maintaining low bias.
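As a minimal illustration of the variance problem the abstract attributes to IPW, here is a toy sketch (not the paper's code; the two-action setup and all numbers are hypothetical): each logged reward is reweighted by $\pi(a)/p(a)$, and a rarely logged but heavily targeted action produces a large weight that lets a few samples dominate the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: two actions, a logging policy p and a
# target policy pi that disagree on which action to favor.
n = 100_000
p = np.array([0.9, 0.1])             # logging policy: rarely logs action 1
pi = np.array([0.2, 0.8])            # target policy: prefers action 1
true_reward = np.array([0.3, 0.7])   # expected reward of each action

a = rng.choice(2, size=n, p=p)       # logged actions
r = rng.binomial(1, true_reward[a])  # observed binary rewards

# IPW estimate: reweight each logged reward by pi(a) / p(a).
w = pi[a] / p[a]
v_ipw = np.mean(w * r)

v_true = np.dot(pi, true_reward)     # 0.2*0.3 + 0.8*0.7 = 0.62
print(v_ipw, v_true)
# The weight for the rare action is 0.8 / 0.1 = 8, so roughly 10% of
# the samples carry most of the estimate -- the source of IPW's variance.
```

The estimate is unbiased, but its per-sample variance is driven by the $1/p(a)$ factor, which is exactly the term the NW construction replaces.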
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Thank you very much for the helpful and constructive comments from the three reviewers, as well as for the AE’s contribution. I have prepared the camera-ready revision. Compared with the previous revised version, the manuscript has been further polished.
Below I summarize the main changes from the original submission to the revised version.
* On the motivation
I presented the representation result for off-policy evaluation in Section 3.1.1. Specifically, define
\begin{equation}
f^{\pi}(p_{ia})=\mathbb{E}[\pi_{ia}r_{ia} | p_{ia}].
\end{equation}
Using this definition and the law of total expectation, I derive the following representation results:
\begin{equation}
V^{\pi} = \mathbb{E}[p_{ia_i}^{-1}f^{\pi}(p_{ia_i})];
\end{equation}
and
\begin{equation}
V^{\pi}= \mathbb{E}\left[f^{\pi}(p_{ia})\right].
\end{equation}
The first result shows that $f^{\pi}(\cdot)$ admits a design-based representation analogous to the IPW estimator, while the second shows that it also admits a model-based representation analogous to the DM estimator.
If one instead defines $f^{r}(p_{ia})=\mathbb{E}[r_{ia} | p_{ia}]$, the resulting quantity $\pi_{ia}f^{r}(\cdot)$ does not inherit these representation properties.
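The design-based representation above can be checked numerically. The sketch below is my own toy construction, not the paper's experiment: it assumes $p_{ia}$ is the logging probability of action $a$ for unit $i$, estimates $f^{\pi}$ by averaging $\pi_{ia_i}r_{ia_i}$ within each observed propensity level, and plugs the fit into $V^{\pi} = \mathbb{E}[p_{ia_i}^{-1}f^{\pi}(p_{ia_i})]$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy problem: binary context x, binary action a,
# logging policy p(a|x), target policy pi(a|x), mean rewards mu(x, a).
n = 200_000
p1 = np.array([0.3, 0.7])      # p(a=1 | x) for x = 0, 1
pi1 = np.array([0.6, 0.6])     # pi(a=1 | x), context-independent here
mu = np.array([[0.2, 0.5],     # E[r | x, a]; rows = x, cols = a
               [0.4, 0.8]])

# True target value V^pi = E_x[ sum_a pi(a|x) mu(x, a) ].
v_true = np.mean((1 - pi1) * mu[:, 0] + pi1 * mu[:, 1])   # = 0.51

# Logged data generated under the logging policy.
x = rng.integers(0, 2, size=n)
a = (rng.random(n) < p1[x]).astype(int)
r = rng.binomial(1, mu[x, a])
prop = np.round(np.where(a == 1, p1[x], 1 - p1[x]), 6)  # p_{i a_i} in {0.3, 0.7}
pi_a = np.where(a == 1, pi1[x], 1 - pi1[x])             # pi_{i a_i}

# Estimate f^pi(p) = E[pi * r | p] by averaging within each propensity
# level (a trivial nonparametric regression: p takes only two values here).
target = pi_a * r
f_hat = np.zeros(n)
for lvl in np.unique(prop):
    mask = prop == lvl
    f_hat[mask] = target[mask].mean()

# Design-based representation: V^pi = E[ p^{-1} f^pi(p) ].
v_nw = np.mean(f_hat / prop)
print(v_nw, v_true)
```

With the conditional mean replacing the individual $\pi_{ia_i}r_{ia_i}$ terms, the plug-in average recovers $V^{\pi}$, consistent with the first representation result.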
* On the model framework
I provided an additional justification for why the dataset suffices to recover the representation of $f^{\pi}(\cdot)$ in Section 3.1.2.
In the off-policy evaluation problem, the data are assumed to be collected under a mechanism in which the action assignment $a_i$ for unit $i$ relies only on $p_{ia}$. That is, conditional on $p_{ia}$, the action assignment $a_i$ is independent of $\pi_{ia}r_{ia}$.
Therefore, the representation result for $f^{\pi}(\cdot)$ leads to a modeling framework that links $\pi_{ia}r_{ia}$ to $p_{ia}$ using the dataset.
* On the illustrative example
I conducted a toy simulation to illustrate the potential advantage of the MNW estimator over the NW estimator in Section 4.2.
* On the experiments in Section 5
I conducted an experiment in which the logging policy is estimated; the corresponding performance results are reported in Table 5 in the appendix. The results show that our approaches perform consistently across different settings.
I examined performance under varying sample sizes and report results for a representative dataset, Page, in Figure 1 in the appendix. These results demonstrate that the proposed approaches are robust to the sample size used for policy evaluation.
I added more details to facilitate replication of the experiments; for example, a summary of the datasets used for policy evaluation is provided.
* On robustness to the estimation of $p_{ia}$.
I provided additional discussion on robustness to behavior policy estimation in Section 3.4, where the impact of estimation error and the impact of bias arising from model misspecification are discussed.
* On the explanatory power of $f(p_{ia})$ for $\pi_{ia}r_{ia}$
I included a toy example illustrating that the relationship between $\pi_{ia}r_{ia}$ and the exploration policy $p_{ia}$ conforms to the functional form of $f(\cdot)$.
Code: https://github.com/rong-zhu/NW-OPE
Assigned Action Editor: ~Inigo_Urteaga1
Submission Number: 6353