Keywords: amortization, in-context learning, meta learning, learned optimizers, stochastic optimization
TL;DR: We introduce a unified framework for amortized inference and propose an iterative refinement extension, inspired by stochastic optimization, to better leverage larger datasets.
Abstract: Modern learning systems increasingly rely on amortized learning: the idea of reusing computation or inductive biases shared across tasks to enable rapid generalization to novel problems. This principle spans meta-learning, in-context learning, prompt tuning, learned optimizers, and more. While motivated by similar goals, these approaches differ in how they encode and leverage task-specific information. In this work, we propose a unified framework describing how such methods differ primarily in the aspects of learning they amortize: initializations, learned updates, or predictive mappings. We introduce a taxonomy that categorizes amortized models into parametric, implicit, and explicit regimes, based on whether task adaptation is externalized, internalized, or jointly modeled. Building on this view, we identify a key limitation of current approaches: most methods struggle to scale to large datasets because their capacity to process task data at test time (e.g., context size in ICL) is often limited. We propose iterative amortized inference, a class of models that refine solutions step-by-step over mini-batches, drawing inspiration from stochastic optimization and yielding performance improvements across different amortization regimes.
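The abstract's closing idea, refining a solution step-by-step over mini-batches rather than consuming the whole dataset at once, can be illustrated with a minimal sketch. Note the assumptions: in the paper's setting the `refine` update would be a trained (amortized) network; here a hand-coded step toward the batch statistic stands in for it, and the task (estimating a dataset mean) is a toy placeholder, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: infer the mean of a large dataset the model never sees in full.
data = rng.normal(loc=3.0, scale=1.0, size=(10_000, 2))

def refine(solution, batch, step=0.5):
    """Hypothetical stand-in for a learned update network: nudges the
    current solution toward the mini-batch statistic, in the spirit of
    a stochastic-optimization step."""
    return solution + step * (batch.mean(axis=0) - solution)

solution = np.zeros(2)           # initial solution estimate
batch_size = 64
for t in range(200):             # iterative refinement over mini-batches
    idx = rng.integers(0, len(data), size=batch_size)
    solution = refine(solution, data[idx])
```

The point of the sketch is the control flow: test-time capacity is bounded by the mini-batch size, yet repeated refinement lets the procedure benefit from arbitrarily large datasets, which is the scaling limitation the abstract identifies in fixed-context approaches.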
Submission Number: 75