Off-Policy Evaluation with Out-of-Sample Guarantees

Published: 17 Jul 2023, Last Modified: 17 Jul 2023. Accepted by TMLR.
Authors that are also TMLR Expert Reviewers: ~Fredrik_D._Johansson1
Abstract: We consider the problem of evaluating the performance of a decision policy using past observational data. The outcome of a policy is measured in terms of a loss (i.e., disutility or negative reward), and the main problem is to make valid inferences about its out-of-sample loss when the past data were observed under a different, and possibly unknown, policy. Using a sample-splitting method, we show that it is possible to draw such inferences with finite-sample coverage guarantees about the entire loss distribution, rather than just its mean. Importantly, the method accounts for model misspecifications of the past policy, including unmeasured confounding. The evaluation method can be used to certify the performance of a policy using observational data under a specified range of credible model assumptions.
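To make the abstract's idea concrete, the following is a minimal, hypothetical sketch of one common approach to finite-sample loss bounds under a shifted policy: a weighted split-conformal quantile, where importance weights `w = pi(a|x) / pi_0(a|x)` correct for the difference between the target policy `pi` and the logging policy `pi_0`. The data-generating process, the policies, and the helper `weighted_quantile` are all illustrative assumptions, not the paper's exact algorithm; in particular, the paper's method additionally handles misspecified weights (e.g., via a sensitivity parameter $\Gamma$), which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged data: contexts x, actions a drawn by a known
# logging policy pi_0(a=1|x) = 0.5, and observed losses.
n = 2000
x = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)
loss = np.abs(x) + (a == 0) * 0.5 + rng.exponential(0.2, size=n)

def pi(action, context):
    # Hypothetical target policy to evaluate: prefers action 1.
    return np.where(action == 1, 0.8, 0.2)

# Importance weights correcting logged samples toward the target policy.
w = pi(a, x) / 0.5

def weighted_quantile(values, weights, alpha):
    """Smallest value whose weighted CDF reaches 1 - alpha, after
    reserving mass (here, the largest weight) for an unseen test point;
    returns +inf if the level is never reached."""
    order = np.argsort(values)
    v, p = values[order], weights[order]
    p = p / (p.sum() + np.max(weights))
    cdf = np.cumsum(p)
    idx = np.searchsorted(cdf, 1 - alpha)
    return float(v[idx]) if idx < len(v) else np.inf

# Under correct weights, a new loss incurred by the target policy falls
# below this bound with probability at least 90%.
bound = weighted_quantile(loss, w, alpha=0.1)
print(bound)
```

The key design point the sketch illustrates is that the guarantee is distributional: the bound controls a quantile of the out-of-sample loss, not merely its mean, which is what allows certifying worst-case-style statements about a policy.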
Certifications: Expert Certification
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready submission. We have clarified the sensitivity parameter $\Gamma$ and added some details in Section 2.
Code: https://github.com/sofiaek/off-policy-evaluation
Assigned Action Editor: ~Alain_Durmus1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 882