Uncertainty-Aware Meta-Learning with Analytically Tractable Posterior

Published: 03 Feb 2026, Last Modified: 03 Feb 2026 | AISTATS 2026 Poster | CC BY 4.0
TL;DR: We propose a Bayesian linearized neural network approach for meta-learning regression tasks that (1) adapts efficiently in very low-data regimes, (2) detects out-of-distribution tasks, and (3) handles multimodal task distributions effectively.
Abstract: Meta-learning is a popular approach for learning new tasks with limited data by leveraging the commonalities among different tasks. However, meta-learned models can perform poorly when context data is too limited, or when data is drawn from an out-of-distribution (OoD) task. Especially in safety-critical settings, this necessitates an uncertainty-aware approach to meta-learning. In addition, the often multimodal nature of task distributions can pose unique challenges to meta-learning methods. In this work, we present UNLIMITED, a meta-learning method that (1) makes probabilistic predictions on in-distribution tasks efficiently, (2) is capable of detecting OoD context data, and (3) handles heterogeneous, multimodal task distributions effectively. The strength of our framework lies in its solid theoretical basis, which enables exact Bayesian inference for principled uncertainty estimation and robust generalization. We achieve this by adopting a probabilistic perspective and training a parametric, tunable task distribution via Bayesian inference on a linearized neural network, leveraging Gaussian process theory. Moreover, we make our approach computationally tractable through a low-rank prior covariance learning scheme based on the Fisher Information Matrix. Our numerical analysis demonstrates that UNLIMITED quickly adapts to new tasks and remains accurate even in low-data regimes, that it effectively detects OoD tasks, and that both of these properties continue to hold for multimodal task distributions.
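
To make the abstract's central construction concrete, below is a minimal, hypothetical JAX sketch (not the authors' implementation) of Bayesian inference on a linearized neural network: linearizing the network around a meta-learned initialization turns posterior inference into Gaussian process regression whose feature map is the Jacobian with respect to the parameters. The toy network `net`, the isotropic prior variance `prior_var`, and the noise variance `noise_var` are illustrative assumptions; the paper instead learns a low-rank prior covariance based on the Fisher Information Matrix.

```python
# Illustrative sketch only: GP-style posterior predictive for a network
# linearized around meta-learned parameters params0.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree


def net(params, x):
    """Toy one-hidden-layer regressor; stands in for the meta-learned network."""
    h = jnp.tanh(x @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze(-1)


def linearized_posterior_predictive(params0, X_ctx, y_ctx, X_qry,
                                    prior_var=1.0, noise_var=0.1):
    """Posterior predictive of the linearized network on query inputs.

    Linearizing f around params0 gives features phi(x) = d f(x)/d theta, so a
    Gaussian prior over the parameter perturbation induces a GP with kernel
    k(x, x') = prior_var * phi(x) @ phi(x') and mean function f(x; params0).
    """
    theta0, unravel = ravel_pytree(params0)

    def f_flat(theta, x):
        return net(unravel(theta), x)

    # Jacobian features: one row per data point, one column per parameter.
    phi = lambda X: jax.jacobian(f_flat)(theta0, X)

    Phi_c, Phi_q = phi(X_ctx), phi(X_qry)
    K_cc = prior_var * Phi_c @ Phi_c.T + noise_var * jnp.eye(len(y_ctx))
    K_qc = prior_var * Phi_q @ Phi_c.T
    K_qq = prior_var * Phi_q @ Phi_q.T

    # Condition on the context set (standard GP regression equations).
    resid = y_ctx - net(params0, X_ctx)
    alpha = jnp.linalg.solve(K_cc, resid)
    mean = net(params0, X_qry) + K_qc @ alpha
    cov = K_qq - K_qc @ jnp.linalg.solve(K_cc, K_qc.T)
    return mean, cov


# Example usage on toy 1D regression data.
key = jax.random.PRNGKey(0)
params0 = {
    "W1": 0.5 * jax.random.normal(key, (1, 16)),
    "b1": jnp.zeros(16),
    "W2": 0.5 * jax.random.normal(key, (16, 1)),
    "b2": jnp.zeros(1),
}
X_ctx = jnp.linspace(-1.0, 1.0, 5).reshape(-1, 1)
y_ctx = jnp.sin(3.0 * X_ctx).squeeze(-1)
X_qry = jnp.linspace(-2.0, 2.0, 50).reshape(-1, 1)
mean, cov = linearized_posterior_predictive(params0, X_ctx, y_ctx, X_qry)
```

Because the predictive distribution is an exact Gaussian conditional, the predictive variance grows on inputs far from the context set, which is the kind of signal the paper exploits for detecting OoD tasks.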
Submission Number: 641