Strategic A/B testing via Maximum Probability-driven Two-armed Bandit

Published: 01 May 2025, Last Modified: 18 Jun 2025
ICML 2025 poster, CC BY 4.0
TL;DR: This paper proposes a new and more powerful approach to A/B testing based on a two-armed bandit.
Abstract: Detecting a minor average treatment effect is a major challenge in large-scale applications, where even minimal improvements can have a significant economic impact. Traditional methods, which rely on normal-distribution-based or expanded statistics, often fail to identify such minor effects because they lack the sensitivity to resolve small discrepancies. This work leverages a counterfactual outcome framework and proposes a maximum probability-driven two-armed bandit (TAB) process that weights the mean volatility statistic, thereby controlling the Type I error. Incorporating permutation methods further enhances robustness and efficacy. The established strategic central limit theorem (SCLT) demonstrates that our approach yields a more concentrated distribution under the null hypothesis and a less concentrated one under the alternative hypothesis, greatly improving statistical power. Experimental results indicate a significant improvement in A/B testing, highlighting the potential to reduce experimental costs while maintaining high statistical power.
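The abstract contrasts the proposed TAB statistic with conventional tests built on permutation or normal approximations. For orientation, below is a minimal sketch of the standard two-sample permutation test that serves as the conventional baseline in this setting; this is generic illustrative code, not the authors' maximum probability-driven statistic, and the sample sizes, effect size, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(treatment, control, n_perm=10_000):
    """Two-sample permutation test for a difference in means.

    Repeatedly shuffles the pooled outcomes and recomputes the test
    statistic to approximate its distribution under the null hypothesis.
    """
    observed = treatment.mean() - control.mean()
    pooled = np.concatenate([treatment, control])
    n_t = len(treatment)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # in-place shuffle gives one null relabeling
        diff = pooled[:n_t].mean() - pooled[n_t:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # Add-one correction keeps the p-value valid at finite n_perm.
    return (count + 1) / (n_perm + 1)

# Simulated A/B data with a minor average treatment effect
control = rng.normal(0.0, 1.0, size=5_000)
treatment = rng.normal(0.02, 1.0, size=5_000)  # effect of 0.02 std units
print(f"permutation p-value: {permutation_pvalue(treatment, control):.4f}")
```

With an effect this small relative to the outcome noise, the baseline test often fails to reject the null at conventional significance levels; this low-power regime is precisely what the proposed TAB process targets.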
Lay Summary: In industry, comparing the effectiveness of two experimental strategies (for example, evaluating which of two advertising policies yields higher revenue) is both common and challenging. We propose a novel test statistic based on the two-armed bandit framework, rather than relying on the Central Limit Theorem. This statistic not only offers improved statistical power but also effectively controls the Type I error rate. Our work has important implications for enhancing the sensitivity of A/B tests, reducing their duration, and lowering experimental costs.
Primary Area: General Machine Learning->Causality
Keywords: A/B testing, average treatment effect, two-armed bandit, meta-analysis
Submission Number: 3865