TL;DR: Transformers trained to do in-context learning of linear functions undergo a transition from a specialized solution to a generalizing solution as the task diversity of the pretraining data varies.
Abstract: In-context learning (ICL) is a remarkable capability of pretrained transformers that allows models to generalize to unseen tasks after seeing only a few examples. We empirically investigate the conditions on the pretraining distribution that are necessary for ICL to emerge and generalize \emph{out-of-distribution}. Previous work has focused on the number of distinct tasks necessary in the pretraining dataset. Here, we use a different notion of task diversity to study the emergence of ICL in transformers trained on linear functions. We find that as task diversity increases, transformers undergo a transition from a specialized solution, which exhibits ICL only within the pretraining task distribution, to a solution that generalizes out of distribution to the entire task space. We also investigate the nature of the solutions learned by the transformer on both sides of the transition, and observe similar transitions in nonlinear regression problems. We construct a phase diagram to characterize how our concept of task diversity interacts with the number of pretraining tasks. In addition, we explore how factors such as the depth of the model and the dimensionality of the regression problem influence the transition.
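To make the setup concrete, the sketch below illustrates the kind of pretraining data generation the abstract describes: in-context prompts built from linear tasks drawn from a finite pool, evaluated against fresh tasks from the full task space. The specific choices (dimension, context length, and controlling diversity via a low-dimensional subspace) are illustrative assumptions, not the paper's exact construction or its precise diversity measure.

```python
# Minimal sketch of an ICL-of-linear-functions pretraining setup (assumptions noted below).
import numpy as np

rng = np.random.default_rng(0)
d = 8        # dimension of the regression problem (assumed)
n_ctx = 16   # number of in-context (x, y) examples per prompt (assumed)

def sample_task_pool(n_tasks, subspace_dim):
    """Draw a finite pool of linear tasks w in R^d.

    `n_tasks` controls the number of distinct pretraining tasks;
    `subspace_dim` is one hypothetical way to control task diversity:
    tasks confined to a low-dimensional subspace are less diverse than
    tasks spanning all of R^d.
    """
    basis = np.linalg.qr(rng.standard_normal((d, subspace_dim)))[0]   # (d, k) orthonormal
    coeffs = rng.standard_normal((n_tasks, subspace_dim))
    return coeffs @ basis.T                                           # (n_tasks, d)

def make_prompt(w):
    """Build one in-context regression prompt for task w: pairs (x_i, w . x_i)."""
    X = rng.standard_normal((n_ctx, d))
    y = X @ w
    return X, y

# Pretraining prompts come from the (possibly low-diversity) finite task pool...
pool = sample_task_pool(n_tasks=64, subspace_dim=2)
train_prompts = [make_prompt(pool[rng.integers(len(pool))]) for _ in range(4)]

# ...while out-of-distribution evaluation uses fresh tasks from the full task space.
w_ood = rng.standard_normal(d)
X_eval, y_eval = make_prompt(w_ood)
print(X_eval.shape, y_eval.shape)  # (16, 8) (16,)
```

In this framing, the transition studied in the paper corresponds to whether a transformer pretrained on prompts from `pool` can still solve prompts generated from `w_ood`.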
Lay Summary: Modern machine learning methods, including transformers, often display a capability called in-context learning (ICL). ICL is when a machine learning model learns to perform a new task by simply seeing a few examples of that task within the instructions you give it, without needing to be completely retrained. This capability makes AI models much more flexible and efficient, because completely retraining an AI from scratch is often costly and time-consuming.
A natural scientific question is what conditions are required for ICL to appear. To answer this, it helps to simplify the tasks the model sees -- following earlier work, we use a mathematically simple task based on linear regression. In this simple setting, we investigate the conditions under which models are able to learn in-context, based on the tasks they see during their initial training period. While earlier work demonstrated that the model must see a sufficient number of tasks during initial training, we show that models must also see tasks that are sufficiently diverse: the tasks must be different enough from each other. Importantly, this diversity between tasks allows the model to perform well not only on tasks similar to the training tasks, but also on tasks that are "far away" from those the model has already seen.
Our work is an important step toward understanding *how* artificial intelligence methods work; this kind of understanding can help build trust in AI and support AI safety.
Primary Area: Theory->Domain Adaptation and Transfer Learning
Keywords: In-context learning, Generalization, OOD, out-of-distribution, Machine Learning, ICL
Submission Number: 13899