On the Misinformation in a Statistical Experiment
TL;DR: The classical Bayesian optimal experimental design (OED) framework fails under misspecification, so we propose new axioms and information measures that enable robust inference when both the model and the inference procedure are misspecified.
Abstract: The principle that more informative experiments are always better is a cornerstone of Bayesian experimental design. This principle assumes that the practitioner's model and inference procedure are correct. In practice, both the data-generating model and the inferential approximation are inevitably misspecified, and we show that under these conditions the classical framework for comparing experiments breaks down. Designs ranked as most informative can become actively harmful, amplifying bias to produce confident but incorrect inferences. We demonstrate that commonly accepted axioms of experimental utility, such as Blackwell monotonicity, fail under misspecification, and that information measures proposed to handle it, such as the Expected Generalized Information Gain (EGIG), do not obey these axioms.
To resolve this, we propose a generalized axiomatic framework for robust Bayesian experimental design. We prove that EGIG satisfies our axioms as a criterion that penalizes inferential error, providing a principled foundation for its use in Bayesian experimental design. As a complementary approach, we derive a new measure that instead penalizes model error. Finally, we demonstrate our framework's utility across common modes of misspecification, showing it provides a reliable guide for experimental design where classical methods fail.
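For context, the classical criterion that the abstract says breaks down is the standard expected information gain (EIG). A textbook sketch of that baseline definition (not the paper's generalized EGIG, whose exact form is given in the paper) ranks a design $\xi$ by

$$\mathrm{EIG}(\xi) \;=\; \mathbb{E}_{p(y \mid \xi)}\!\left[\, \mathrm{KL}\!\left( p(\theta \mid y, \xi) \,\middle\|\, p(\theta) \right) \right] \;=\; I(\theta; y \mid \xi),$$

i.e., the expected Kullback-Leibler divergence from prior to posterior, equivalently the mutual information between parameters $\theta$ and data $y$ under the assumed model; this ranking is only meaningful when the assumed model and the posterior computation are correct.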
Submission Number: 1002