Measure Theory of Conditionally Independent Random Function Evaluation

TMLR Paper7468 Authors

11 Feb 2026 (modified: 23 Feb 2026) · Under review for TMLR · CC BY 4.0
Abstract: In sequential design strategies, common in geostatistics and Bayesian optimization, the selection of a new observation point $X_{n+1}$ of a random function $\mathbf f$ is informed by past data, captured by the filtration $\mathcal F_n=\sigma(\mathbf f(X_0),\dots,\mathbf f(X_n))$. The random nature of $X_{n+1}$ introduces measure-theoretic subtleties in deriving the conditional distribution $\mathbb P(\mathbf f(X_{n+1})\in A \mid \mathcal F_n)$. Practitioners often resort to a heuristic: treating $X_0,\dots, X_{n+1}$ as fixed parameters within the conditional probability calculation. This paper investigates the mathematical validity of this widespread practice. We construct a counterexample to prove that this approach is, in general, incorrect. We also establish our central positive result: for continuous Gaussian random functions and their canonical conditional distribution, the heuristic is sound. This provides a rigorous justification for a foundational technique in Bayesian optimization and spatial statistics. We further extend our analysis to include settings with noisy evaluations and to cases where $X_{n+1}$ is not adapted to $\mathcal F_n$ but is conditionally independent of $\mathbf f$ given the filtration.
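The "fixed-parameter" heuristic the abstract discusses can be illustrated with a minimal sketch: a Gaussian random function is conditioned on past evaluations via the standard Gaussian conditioning formulas, with the (random, data-dependent) next point plugged in as if it were a deterministic constant. The kernel choice, data values, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kernel(a, b, length_scale=1.0):
    """Squared-exponential covariance k(a, b) between 1-D point arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def plug_in_posterior(X, y, x_next, noise=0.0):
    """Mean and variance of f(x_next) | f(X) = y, treating X and x_next
    as fixed parameters (the heuristic under study)."""
    K = kernel(X, X) + noise * np.eye(len(X))
    k_star = kernel(X, np.atleast_1d(x_next))
    alpha = np.linalg.solve(K, y)
    mean = float(k_star.T @ alpha)
    var = float(1.0 - k_star.T @ np.linalg.solve(K, k_star))
    return mean, var

# Illustrative past design points and observed values.
X = np.array([0.0, 0.5, 1.0])
y = np.array([0.1, -0.3, 0.2])

# In a sequential design, x_next would be chosen from the past data
# (F_n-measurable); the heuristic nonetheless conditions as if it were
# a constant, which the paper shows is valid for continuous Gaussian
# random functions but not in general.
mean, var = plug_in_posterior(X, y, x_next=0.75, noise=1e-6)
```

The paper's counterexample shows this plug-in computation can disagree with the true conditional distribution for general random functions, while its main positive result justifies it in the continuous Gaussian case.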
Submission Type: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Removed keywords and MSC classification from abstract
Assigned Action Editor: ~Trevor_Campbell1
Submission Number: 7468