Keywords: Bayesian Optimal Experiment Design, Active Learning, Causal Inference
Abstract: Accurately estimating personalized treatment effects often demands substantial data, incurring high costs across diverse applications such as personalized advertisement delivery and clinical trials. Existing methodologies employ deep models to estimate treatment effects in high-dimensional data, often relying on randomly selected experiments. We explore the potential of active learning techniques to enhance the efficiency of experimentation. Our focus centers on a relatively underexplored yet common scenario where each unit is subject to experimentation only once. We build upon the Bayesian active learning framework to select, at each step, a unit and a treatment to apply to it that maximize the information gain from the experiment. Our approach is flexible, accommodating both discrete and continuous treatment settings. Furthermore, we address the inefficiencies in batch experimentation by employing greedy and policy-gradient-based optimization strategies.
We validate the effectiveness of our proposed method on synthetic and high-dimensional semi-synthetic datasets (based on IHDP and TCGA). Our results show significant improvements in experimentation efficiency over the baseline methods.
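The selection principle described in the abstract, scoring candidate (unit, treatment) pairs by expected information gain and greedily picking a batch, can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a binary outcome, approximates the posterior with an ensemble of predictive probabilities, and uses the standard BALD mutual-information criterion as the information-gain score.

```python
import numpy as np

def binary_entropy(p):
    # Entropy of a Bernoulli(p) outcome, clipped for numerical safety.
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def information_gain(probs):
    """BALD-style mutual information per candidate.

    probs: array of shape (ensemble_size, n_candidates), where each row is
    one posterior sample's predicted outcome probability for every
    candidate (unit, treatment) pair (hypothetical flattened encoding).
    """
    mean_p = probs.mean(axis=0)
    # Predictive entropy minus expected conditional entropy:
    # high when ensemble members disagree about the outcome.
    return binary_entropy(mean_p) - binary_entropy(probs).mean(axis=0)

def greedy_batch(probs, k):
    # Greedily select the k candidates with the highest information gain.
    return np.argsort(information_gain(probs))[::-1][:k]

# Toy example: three candidates, ensemble of two posterior samples.
# The ensemble agrees on candidates 0 and 1 but disagrees on candidate 2,
# so candidate 2 carries the most information about the model.
probs = np.array([[0.9, 0.5, 0.1],
                  [0.9, 0.5, 0.9]])
selected = greedy_batch(probs, k=1)
```

Here the each-unit-experimented-once constraint would be enforced by removing a unit's remaining (unit, treatment) candidates from the pool after it is selected; a true batch method would also rescore candidates jointly rather than independently, which is where the paper's greedy and policy-gradient strategies come in.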
Submission Number: 61