Input-Adaptive Bayesian Model Averaging

ICLR 2026 Conference Submission22670 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Model averaging, Variational inference, Adaptivity
TL;DR: We propose a Bayesian framework for adaptive model averaging, casting it as inference with an input-adaptive prior and an amortized variational posterior, enabling input-specific model weighting.
Abstract: This paper addresses prediction problems with multiple candidate models, where the goal is to combine their outputs. This task is especially challenging in heterogeneous settings, where different models may be better suited to different inputs. We propose Input-Adaptive Bayesian Model Averaging (IABMA), a Bayesian method that assigns model weights conditional on the input. IABMA employs an input-adaptive prior, and yields a posterior distribution that adapts to each prediction, which we estimate via amortized variational inference. We derive formal guarantees for its performance relative to any single predictor selected per input, and evaluate IABMA across regression and classification tasks, studying data from personalized cancer treatment, credit-card fraud detection, and UCI datasets. IABMA consistently delivers more accurate and better-calibrated predictions than both non-adaptive baselines and existing adaptive methods.
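The core idea described in the abstract, combining candidate models with weights that depend on the input, can be illustrated with a minimal sketch. All names below (`f1`, `f2`, `gate`, `theta`) are hypothetical stand-ins, not the authors' actual IABMA implementation: the gate is a simple parametric score function playing the role of the amortized variational posterior over model weights.

```python
import numpy as np

# Hypothetical candidate predictors (assumption: any fitted models would do;
# here simple closed-form stand-ins for illustration).
def f1(x):
    return 2.0 * x        # e.g. better suited to one input regime

def f2(x):
    return x ** 2         # e.g. better suited to another regime

def gate(x, theta):
    # Input-adaptive weights: a softmax over per-model scores that depend
    # on the input x (a toy stand-in for the learned amortized posterior).
    scores = np.stack([theta[0] * x, theta[1] * x], axis=-1)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x, theta):
    w = gate(x, theta)                         # shape (..., 2), rows sum to 1
    preds = np.stack([f1(x), f2(x)], axis=-1)  # per-model predictions
    return (w * preds).sum(axis=-1)            # input-weighted average

x = np.array([0.5, 3.0])
theta = np.array([-1.0, 1.0])  # hypothetical gate parameters
y_hat = predict(x, theta)
```

Unlike a fixed-weight ensemble, the weights here vary with each input `x`, which is the heterogeneity the abstract targets; the actual method learns these weights via amortized variational inference rather than a fixed score function.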
Supplementary Material: zip
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 22670