Fast Rate Bounds for Multi-Task and Meta-Learning with Different Sample Sizes

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: Multi-task learning, generalization bounds, PAC-Bayes, fast-rate bounds, unbalanced sample sizes
TL;DR: We study fast-rate generalization bounds for multi-task and meta-learning with unbalanced task sizes, showing key differences from the balanced setting and providing new results for the unbalanced case.
Abstract: We present new fast-rate PAC-Bayesian generalization bounds for multi-task and meta-learning in the unbalanced setting, i.e., when the tasks have training sets of different sizes, as is typically the case in real-world scenarios. Previously, only standard-rate bounds were known for this situation, while fast-rate bounds were limited to the setting in which all training sets have equal size. Our new bounds are numerically computable as well as interpretable, and we demonstrate their flexibility by handling a number of cases in which they give stronger guarantees than previous bounds. Beyond the bounds themselves, we also make conceptual contributions: we show that the unbalanced multi-task setting has different statistical properties from the balanced one, and specifically that proofs from the balanced setting do not carry over to the unbalanced one. Additionally, we highlight that the unbalanced setting admits two meaningful definitions of multi-task risk, depending on whether all tasks should be considered equally important or whether sample-rich tasks should receive more weight than sample-poor ones.
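To make the distinction at the end of the abstract concrete, here is a minimal sketch of the two risk definitions; the notation (T tasks, n_t training samples for task t, per-task risk R_t) is an illustrative assumption, not taken verbatim from the paper.

% Two meaningful multi-task risks in the unbalanced setting.
% Notation (assumed for illustration): T tasks, task t has n_t samples,
% N = n_1 + ... + n_T, and \mathcal{R}_t(h) is the expected risk of hypothesis h on task t.

% Uniform risk: every task counts equally, regardless of its sample size.
\[
  \mathcal{R}_{\mathrm{unif}}(h) = \frac{1}{T} \sum_{t=1}^{T} \mathcal{R}_t(h)
\]

% Sample-weighted risk: sample-rich tasks receive proportionally more weight.
\[
  \mathcal{R}_{\mathrm{wt}}(h) = \sum_{t=1}^{T} \frac{n_t}{N}\, \mathcal{R}_t(h),
  \qquad N = \sum_{t=1}^{T} n_t
\]

% The two definitions coincide exactly when n_1 = ... = n_T, i.e. in the
% balanced setting; they diverge as soon as the sample sizes differ.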
Primary Area: Theory (e.g., control theory, learning theory, algorithmic game theory)
Submission Number: 9196