Personalized Incentive Alignment: Correcting Utility-Driven Selection Bias in A/B Tests
TL;DR: Characterization of selection bias in causal inference and design of an optimal incentive mechanism to mitigate it.
Abstract: Although A/B testing is a powerful tool for estimating the average treatment effect (ATE), it often proves
impractical in social or commercial settings because ethical and business constraints induce participant
non-compliance. For example, patients may refuse assignment to less promising therapies, and users may
choose whether to adopt a newly released feature based on personal preferences. In this work, we posit that
participants act to maximize individual incentives. To capture this behavior, we adopt a utility-based random
choice model that explicitly characterizes the identification bias introduced by self-selection and the estimation
instability caused by feature imbalance. We then demonstrate how heterogeneous incentives generate both
selection bias and inflated variance. Building on these insights, we design an optimal incentive mechanism
that equalizes preference distributions across treatment arms, thereby achieving a more balanced covariate
profile, lower variance, and a sharper identified set with minimal bias. Finally, we propose an online learning
framework that adaptively identifies the optimal incentive scheme during the experiment and produces valid
treatment-effect estimates. We validate our theoretical results through both simulation studies and field
experiments.
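To make the self-selection problem concrete, the following minimal simulation sketch (not the authors' code; all variable names, functional forms, and parameters are illustrative assumptions) shows how a latent preference that drives both arm choice and outcomes inflates a naive difference-in-means estimate of the ATE.

```python
# Illustrative sketch of utility-driven selection bias in an A/B test.
# Assumptions: a single latent preference u, a logistic choice rule, and
# a constant true treatment effect of 1.0 -- none of this is taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent preference: participants with higher u favor the treatment arm.
u = rng.normal(size=n)

# Potential outcomes: true ATE is 1.0; outcomes also depend on u,
# so preference becomes a confounder once participants choose their own arm.
y0 = 0.5 * u + rng.normal(size=n)
y1 = y0 + 1.0

# Utility-based random choice: probability of adopting treatment rises with u.
p_treat = 1.0 / (1.0 + np.exp(-2.0 * u))
d = rng.binomial(1, p_treat)

y_obs = np.where(d == 1, y1, y0)

naive_ate = y_obs[d == 1].mean() - y_obs[d == 0].mean()
print(f"true ATE = 1.00, naive difference-in-means = {naive_ate:.2f}")
# The naive estimate is biased upward: high-u participants both
# self-select into treatment and have higher baseline outcomes.
```

Under these assumed parameters the naive estimate lands well above the true effect, which is the kind of gap the paper's incentive mechanism and online learning framework are designed to close.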
Submission Number: 1319