Representative, Informative, and De-Amplifying: Requirements for Robust Bayesian Active Learning under Model Misspecification
TL;DR: In Bayesian experimental design, we show that model misspecification manifests in a term we call error (de-)amplification; explicitly targeting de-amplification during acquisition increases robustness to model misspecification.
Abstract: In many settings in science and industry, such as drug discovery and clinical trials, a central challenge is designing experiments under time and budget constraints.
*Bayesian Optimal Experimental Design (BOED)* is a paradigm for selecting maximally informative designs that has been increasingly applied to such problems. During training, BOED selects inputs according to a pre-determined acquisition criterion that targets *informativeness*. During testing, the model learned during training encounters a naturally occurring distribution of test samples. This leads to an instance of covariate shift, where the train and test samples are drawn from different distributions (the training samples are not *representative* of the test distribution).
Prior work has shown that in the presence of model misspecification, covariate shift amplifies generalization error.
Our first contribution is a mathematical analysis that reveals the key contributors to generalization error in the presence of model misspecification. We show that generalization error under misspecification results not only from covariate shift but also from a phenomenon we term *error (de-)amplification*, which has not been identified or studied in prior work.
We then develop a new acquisition function that mitigates the effects of model misspecification by including terms for representativeness, informativeness, and de-amplification (R-IDeA).
Our experimental results demonstrate that the proposed method outperforms methods that target only informativeness, only representativeness, or both.
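To make the idea concrete, the sketch below shows a toy acquisition score combining the three criteria named above. All specifics here are illustrative assumptions, not the paper's actual R-IDeA formulation: predictive variance stands in for informativeness, average kernel similarity to a test pool stands in for representativeness, and a generic misspecification-error proxy stands in for the (de-)amplification term.

```python
import numpy as np

def acquisition_scores(cand, test_pool, pred_var, resid_proxy,
                       lam_rep=1.0, lam_de=1.0, bandwidth=0.5):
    """Toy acquisition combining informativeness, representativeness,
    and a de-amplification penalty (all proxies are illustrative)."""
    # Informativeness: prefer candidates with high predictive uncertainty.
    info = pred_var
    # Representativeness: mean RBF similarity of each candidate
    # to samples from the (naturally occurring) test distribution.
    d2 = (cand[:, None] - test_pool[None, :]) ** 2
    rep = np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)
    # De-amplification: penalise candidates where the proxy for
    # misspecification error is large.
    de = -resid_proxy
    return info + lam_rep * rep + lam_de * de

rng = np.random.default_rng(0)
cand = np.linspace(-3.0, 3.0, 7)            # candidate designs
test_pool = rng.normal(0.0, 1.0, size=100)  # samples from the test distribution
pred_var = 0.1 + np.abs(cand)               # uncertainty grows away from data
resid = 0.5 * cand ** 2                     # assumed misspecification proxy
scores = acquisition_scores(cand, test_pool, pred_var, resid)
best = cand[np.argmax(scores)]              # design selected next
```

An acquisition targeting only informativeness would pick the extreme candidates here; the representativeness and de-amplification terms pull the selection back toward the test distribution, which is the trade-off the abstract describes.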
Submission Number: 1348