Efficient Automated Online Experimentation with Multi-Fidelity

Published: 10 Dec 2021, Last Modified: 05 May 2023
Venue: NeurIPS 2021 Workshop MetaLearn (Poster)
Keywords: online experimentation, Bayesian optimization, Gaussian processes, A/B testing, multi-fidelity optimization
TL;DR: We introduce a method for automated online experimentation that leverages advances from multi-fidelity optimization in order to efficiently select the best production system to deploy to users.
Abstract: Prominent online experimentation approaches in industry, such as A/B testing, are often not scalable with respect to the number of candidate models. To address this shortcoming, recent work has introduced an automated online experimentation (AOE) scheme that uses a probabilistic model of user behavior to predict the online performance of candidate models. While effective, these predictions of online performance may be biased due to various unforeseen circumstances, such as user modelling bias, a shift in data distribution, or an incomplete set of features. In this work, we leverage advances from multi-fidelity optimization to combine AOE with Bayesian optimization (BO). This mitigates the effect of biased predictions, while still retaining scalability and performance. Furthermore, our approach allows us to optimally adjust the number of users in a test cell, which is typically kept constant in online experimentation schemes, leading to a more effective allocation of resources. Our synthetic experiments show that our method yields improved performance compared to AOE, BO, and other baseline approaches.
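To make the multi-fidelity idea in the abstract concrete, the sketch below treats cheap, possibly biased AOE-style predictions as a low-fidelity source and online tests with an adjustable test-cell size as the high-fidelity source, then allocates a user budget with a simple upper-confidence-bound rule. This is only a minimal illustration of the general multi-fidelity Bayesian optimization setup, not the paper's method; the helper names (aoe_predict, online_test), priors, noise levels, and cost numbers are all assumptions made for the example.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of multi-fidelity
# selection over a discrete set of candidate models. Fidelity 1 is a cheap, biased
# offline AOE-style prediction; fidelity 2 is a noisy online measurement whose
# variance shrinks with the number of users allocated to the test cell.
import numpy as np

rng = np.random.default_rng(0)

n_candidates = 20
true_reward = rng.normal(0.0, 1.0, n_candidates)   # unknown online metric per model
aoe_bias = rng.normal(0.3, 0.2, n_candidates)      # unknown bias of the AOE predictions

def aoe_predict(i):
    """Low-fidelity estimate: biased but cheap (illustrative)."""
    return true_reward[i] + aoe_bias[i] + rng.normal(0.0, 0.05)

def online_test(i, n_users):
    """High-fidelity estimate: unbiased, noise shrinks with n_users (illustrative)."""
    return true_reward[i] + rng.normal(0.0, 1.0 / np.sqrt(n_users))

# Independent Gaussian posterior per candidate: prior N(0, 1), updated with
# observations whose noise variance depends on the fidelity used.
post_mean = np.zeros(n_candidates)
post_var = np.ones(n_candidates)

def update(i, y, obs_var):
    """Standard Gaussian conjugate update of candidate i's posterior."""
    k = post_var[i] / (post_var[i] + obs_var)
    post_mean[i] += k * (y - post_mean[i])
    post_var[i] *= (1.0 - k)

# Warm start: screen every candidate with the cheap AOE prediction,
# treating its unknown bias as extra observation noise.
for i in range(n_candidates):
    update(i, aoe_predict(i), obs_var=0.3)

# Budgeted loop: spend online users only on candidates whose upper confidence
# bound still competes with the current best posterior mean.
budget_users = 5000
while budget_users > 0:
    ucb = post_mean + 2.0 * np.sqrt(post_var)
    i = int(np.argmax(ucb))
    n_users = min(500, budget_users)          # test-cell size chosen per round
    y = online_test(i, n_users)
    update(i, y, obs_var=1.0 / n_users)       # variance matches the 1/sqrt(n) noise
    budget_users -= n_users

best = int(np.argmax(post_mean))
print(f"deployed candidate {best}, true reward {true_reward[best]:+.3f}, "
      f"oracle best {true_reward.max():+.3f}")
```

The one design choice mirrored from the abstract is that the online observation's noise variance scales with the allocated test-cell size, so choosing how many users to place in a cell becomes part of the fidelity/cost trade-off rather than a fixed constant.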