Bridging Offline and Online Experimentation: Constraint Active Search for Deployed Performance Optimization

Published: 21 Oct 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: A common challenge in machine learning model development is that models perform differently during the offline development phase and the eventual deployment phase. Fundamentally, the goal of such a model is to maximize performance during deployment, but that performance cannot be measured offline. We therefore propose to augment standard offline sample-efficient hyperparameter optimization to instead search offline for a diverse set of models with potentially superior online performance. To this end, we use Constraint Active Search to identify such a diverse set of models, and we study their online performance using a variant of Best Arm Identification to select the best model for deployment. The key contribution of this article is the theoretical analysis of the two-phase development strategy, covering both the probability of improvement over the baseline and the number of viable treatments for online testing. We demonstrate the viability of this strategy on synthetic examples, as well as on a recommendation system benchmark.
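To make the two-phase strategy in the abstract concrete, below is a minimal, hypothetical Python sketch: an offline phase that, in the spirit of Constraint Active Search, collects a diverse set of configurations satisfying a constraint on a cheap proxy metric, followed by an online phase that selects among them with a simple successive-halving best-arm-identification routine. All names here (`offline_score`, `online_reward`, `cas_like_search`, the thresholds and budgets) are illustrative assumptions, not the paper's actual algorithm; in particular, real CAS relies on a surrogate model and an acquisition rule rather than the random sampling used below.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Offline phase: a simplified stand-in for Constraint Active Search (CAS).
def offline_score(x):
    # Toy proxy objective over a 2-D hyperparameter space (assumption).
    return 1.0 - np.sum((x - 0.5) ** 2)

def cas_like_search(n_samples=500, threshold=0.8, min_dist=0.15):
    """Collect a diverse set of configurations with offline_score >= threshold."""
    kept = []
    for _ in range(n_samples):
        x = rng.uniform(0.0, 1.0, size=2)
        if offline_score(x) < threshold:
            continue  # infeasible under the offline constraint
        if all(np.linalg.norm(x - y) >= min_dist for y in kept):
            kept.append(x)  # far from all previously kept points -> diverse
    return kept

# --- Online phase: best-arm identification over the candidate set.
def online_reward(x):
    # Noisy deployed metric that only partially agrees with the offline
    # proxy (assumption: offline/online mismatch).
    return offline_score(x) + 0.3 * x[0] + rng.normal(scale=0.1)

def successive_halving(arms, budget_per_round=20):
    """Halve the candidate set each round, keeping the empirically best arms."""
    arms = list(arms)
    while len(arms) > 1:
        means = [np.mean([online_reward(a) for _ in range(budget_per_round)])
                 for a in arms]
        order = np.argsort(means)[::-1]
        arms = [arms[i] for i in order[: max(1, len(arms) // 2)]]
        budget_per_round *= 2  # spend more samples on fewer survivors
    return arms[0]

candidates = cas_like_search()
best = successive_halving(candidates)
print(f"{len(candidates)} offline candidates; deployed config: {best}")
```

The sketch mirrors the paper's structure: diversity of the offline candidate set is what gives the online phase a chance of finding a model whose deployed performance beats the offline optimum of the proxy metric.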
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=XX8CEN815d
Changes Since Last Submission: In response to the decision (minor revision), we revised the paper:
* Addressed the issue with Assumption 3 (constant-ratio covering).
* Rewrote the appendix section on the CAS/offline development phase.
* Rewrote the experimental section.
Assigned Action Editor: ~Jonathan_Scarlett1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 275