PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees

12 Jun 2020 (modified: 22 Oct 2023), LifelongML@ICML2020
Student First Author: Yes
TL;DR: A novel class of meta-learning algorithms based on PAC-Bayesian learning theory.
Keywords: meta-learning, multi-task learning
Abstract: Meta-learning can successfully acquire useful inductive biases from data, especially when a large number of meta-tasks are available. Yet, its generalization properties to unseen tasks are poorly understood. Particularly when the number of meta-tasks is small, this raises concerns about overfitting. We provide a theoretical analysis using the PAC-Bayesian framework and derive novel generalization bounds for meta-learning with unbounded loss functions and Bayesian base learners. Using these bounds, we develop a class of PAC-optimal meta-learning algorithms with performance guarantees and a principled meta-regularization. When instantiating our PAC-optimal hyper-posterior (PACOH) with Gaussian processes as base learners, the resulting approach consistently outperforms several popular meta-learning methods, both in predictive accuracy and in the quality of its uncertainty estimates.
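To make the GP instantiation concrete, here is a minimal, hypothetical sketch (not the authors' implementation): a shared GP prior whose kernel hyperparameters are meta-learned by maximizing the sum of per-task GP log marginal likelihoods, with a Gaussian hyper-prior standing in for the principled meta-regularization that the paper derives from its PAC-Bayesian bound. The RBF kernel form, the toy sinusoid tasks, and the `meta_reg` weight are illustrative assumptions.

```python
# Hedged sketch of a MAP-style PACOH-like procedure with GP base learners.
# All modeling choices below (kernel, tasks, regularizer weight) are
# illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, log_ls, log_sv):
    """RBF kernel with learnable log length-scale and log signal variance."""
    d = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(log_sv) * np.exp(-0.5 * d / np.exp(2 * log_ls))

def gp_log_marginal_likelihood(params, X, y):
    """Log marginal likelihood of one task's data under the shared GP prior."""
    log_ls, log_sv, log_noise = params
    K = rbf_kernel(X, X, log_ls, log_sv) + np.exp(log_noise) * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(X) * np.log(2 * np.pi))

def pacoh_map_objective(params, tasks, meta_reg=0.1):
    """Negative (sum of task evidences) plus an L2 hyper-prior penalty.

    The L2 term approximates a Gaussian hyper-prior on the GP
    hyperparameters and plays the role of the meta-regularizer.
    """
    mll = sum(gp_log_marginal_likelihood(params, X, y) for X, y in tasks)
    return -mll + meta_reg * np.sum(params**2)

# Toy meta-training tasks: noisy sinusoids with a shared structure (illustrative).
rng = np.random.default_rng(0)
tasks = []
for _ in range(5):
    X = rng.uniform(-3, 3, size=(20, 1))
    y = np.sin(X[:, 0] + rng.normal()) + 0.1 * rng.normal(size=20)
    tasks.append((X, y))

res = minimize(pacoh_map_objective, x0=np.zeros(3), args=(tasks,), method="L-BFGS-B")
print("meta-learned GP prior (log length-scale, log signal var, log noise):", res.x)
```

The meta-learned hyperparameters then define the GP prior used by the Bayesian base learner on new tasks; in the paper's terms, the full method places a hyper-posterior over priors rather than the single point estimate sketched here.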
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2002.05551/code)