Problem-Parameter-Agnostic MAML

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Meta learning; MAML; Tuning-free
Abstract: Meta-learning aims to equip artificial intelligence systems with the ability to learn how to learn. Among its methods, Model-Agnostic Meta-Learning (MAML) is particularly effective for enabling rapid task adaptation. However, vanilla MAML suffers from a critical drawback: its performance is highly sensitive to carefully tuned hyperparameters, especially learning rates. Since these learning rates theoretically depend on problem-specific factors (e.g., task heterogeneity and loss smoothness) that are typically unknown, this reliance hinders training stability and limits adaptation performance. To address this challenge, we propose TFMAML, a tuning-free MAML algorithm that integrates adaptive stepsize and momentum techniques. TFMAML offers two key advantages: (i) it eliminates dependence on problem-specific parameters, allowing stepsizes to be pre-set without costly manual tuning or additional training procedures; (ii) it guarantees convergence to first-order stationary points (FOSPs), which vanilla MAML lacks. We provide rigorous theoretical analysis showing that TFMAML achieves the state-of-the-art convergence rate of $\mathcal{O}(\epsilon^{-4})$ to reach an FOSP. Furthermore, we prove that its first-order variant, TFFOMAML, avoids Hessian computations while retaining the same $\mathcal{O}(\epsilon^{-4})$ convergence rate. Unlike standard First-Order MAML, which suffers from a constant error floor, TFFOMAML eliminates this bias and converges reliably to stationary points. Extensive experiments validate our theory, demonstrating the clear superiority of TFMAML and TFFOMAML over existing benchmarks.
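To make the tuning-free idea concrete, here is a minimal, hypothetical sketch. The paper's actual TFMAML update rules are not given in this abstract, so the snippet below pairs a generic first-order MAML outer loop with an AdaGrad-norm style adaptive outer stepsize (a common tuning-free heuristic) on a toy one-parameter regression task family; all names, values, and the specific stepsize rule are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Each task: fit y = a * x, with a task-specific slope a ~ N(2.0, 0.5^2).
    a = rng.normal(loc=2.0, scale=0.5)
    x = rng.normal(size=20)
    return x, a * x

def loss_and_grad(w, x, y):
    # Squared-error loss for the scalar model y_hat = w * x.
    err = w * x - y
    return 0.5 * np.mean(err ** 2), np.mean(err * x)

def tuning_free_fomaml_sketch(n_iters=200, inner_lr=0.05):
    w = 0.0        # meta-parameter (scalar for simplicity)
    g2_sum = 1e-8  # running sum of squared outer-gradient norms
    for _ in range(n_iters):
        x, y = sample_task()
        # Inner adaptation: one gradient step on the sampled task.
        _, g_in = loss_and_grad(w, x, y)
        w_adapted = w - inner_lr * g_in
        # Outer gradient at the adapted parameters (first-order MAML
        # approximation: Hessian terms are dropped).
        _, g_out = loss_and_grad(w_adapted, x, y)
        # AdaGrad-norm stepsize: no problem-dependent constant to tune;
        # the effective stepsize shrinks automatically as gradients accrue.
        g2_sum += g_out ** 2
        w -= g_out / np.sqrt(g2_sum)
    return w

w_final = tuning_free_fomaml_sketch()
# The meta-optimal scalar lies near the mean task slope (2.0).
```

The design point the sketch illustrates: the outer stepsize $1/\sqrt{\sum_t \|g_t\|^2}$ needs no knowledge of smoothness or task-heterogeneity constants, which is the kind of dependence the abstract says TFMAML removes.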
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 10838