Universal Algorithm-Implicit Learning

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Meta-Learning, Few-Shot Learning, Universal Learning
Abstract: Current meta-learning methods are constrained to narrow task distributions with fixed feature and label spaces, limiting their applicability. We present TAIL, a novel algorithm-implicit meta-learner that functions across tasks with varying domains, modalities, and label configurations. Our approach reformulates the few-shot learning problem as a sequence modeling problem. We train a non-causal transformer on sequences of data-label pairs together with an unlabeled query sample to directly predict the label of the query. This causes the transformer to learn an implicit learning algorithm, enabling it to learn new concepts at test time without fine-tuning. Empirically, TAIL achieves state-of-the-art performance on standard benchmarks while generalizing to unseen domains and modalities. Unlike other meta-learning methods, it sustains strong performance on tasks with up to 20 times more classes than seen during training while providing orders-of-magnitude computational savings. Moreover, we introduce a theoretical framework for meta-learning, which allows us to formally describe important properties of meta-learning paradigms.
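The abstract's sequence-modeling reformulation can be illustrated with a minimal sketch of how a few-shot episode might be serialized into a single input sequence. All names here (`serialize_episode`, `pad_label`) are hypothetical illustrations, not from the paper.

```python
# Sketch: turning a few-shot episode into one sequence for a
# sequence-model meta-learner, as the abstract describes.
# Support (feature, label) pairs are interleaved, and the unlabeled
# query is appended with a placeholder label; a non-causal transformer
# would consume this sequence and predict the query's label.

def serialize_episode(support, query_x, pad_label=-1):
    """Build [(x1, y1), ..., (xk, yk), (query_x, pad_label)]."""
    seq = [(x, y) for x, y in support]
    seq.append((query_x, pad_label))  # query gets a placeholder label
    return seq

# Toy two-shot binary episode (hypothetical data):
support = [([0.1, 0.2], 0), ([0.9, 0.8], 1)]
episode = serialize_episode(support, [0.15, 0.25])
```

Because the label space enters only through the tokens in the sequence, nothing in this construction fixes the number of classes, which is consistent with the claim of handling tasks with many more classes than seen in training.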
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8255