TL;DR: We theoretically characterize how in-context learning abilities evolve during gradient descent training of linear attention, revealing abrupt acquisition or progressive improvements depending on how the key and query are parametrized.
Abstract: While attention-based models have demonstrated the remarkable ability of in-context learning (ICL), the theoretical understanding of how these models acquire this ability through gradient descent training is still preliminary. Towards answering this question, we study the gradient descent dynamics of multi-head linear self-attention trained for in-context linear regression. We examine two parametrizations of linear self-attention: one with the key and query weights merged into a single matrix (common in theoretical studies), and one with separate key and query matrices (closer to practical settings). For the merged parametrization, we show that the training dynamics has two fixed points and the loss trajectory exhibits a single, abrupt drop; we derive an analytical time-course solution for a certain class of datasets and initializations. For the separate parametrization, we show that the training dynamics has exponentially many fixed points and the loss exhibits saddle-to-saddle dynamics, which we reduce to scalar ordinary differential equations. During training, the model implements principal component regression in context, with the number of principal components increasing over training time. Overall, we provide a theoretical description of how ICL abilities evolve during gradient descent training of linear attention, revealing abrupt acquisition or progressive improvements depending on how the key and query are parametrized.
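The sketch below illustrates the setup described in the abstract: a single layer of multi-head linear self-attention trained by gradient descent on in-context linear regression prompts. It is not the authors' exact construction (see the linked repository for that); the class name `LinearSelfAttention`, the token format (each context example is x concatenated with its label y, with the query's label zeroed out), the residual connection, the initialization scale, and all hyperparameters are illustrative assumptions. The `merged` flag switches between the fused key-query matrix and the separate key and query matrices that the paper contrasts.

```python
import torch

d, N, H = 8, 32, 2          # feature dim, context examples, attention heads (assumed values)
D = d + 1                   # token dim: x concatenated with its scalar label y

class LinearSelfAttention(torch.nn.Module):
    """One layer of multi-head linear self-attention (softmax removed).

    merged=True  fuses key and query weights into a single matrix per head.
    merged=False keeps separate key and query matrices (product W_Q W_K^T).
    """
    def __init__(self, dim, heads, merged=True):
        super().__init__()
        self.merged = merged
        scale = 1e-2  # small initialization (assumed)
        if merged:
            self.W_KQ = torch.nn.Parameter(scale * torch.randn(heads, dim, dim))
        else:
            self.W_Q = torch.nn.Parameter(scale * torch.randn(heads, dim, dim))
            self.W_K = torch.nn.Parameter(scale * torch.randn(heads, dim, dim))
        self.W_V = torch.nn.Parameter(scale * torch.randn(heads, dim, dim))

    def forward(self, Z):                      # Z: (batch, tokens, dim)
        T = Z.shape[1]
        W_KQ = self.W_KQ if self.merged else self.W_Q @ self.W_K.transpose(-1, -2)
        out = Z                                # residual connection (assumed)
        for h in range(W_KQ.shape[0]):
            scores = Z @ W_KQ[h] @ Z.transpose(1, 2) / T   # linear attention scores, no softmax
            out = out + scores @ Z @ self.W_V[h]
        return out

def icl_regression_batch(batch=64):
    """Prompts for in-context linear regression: N labeled pairs plus one query token."""
    w = torch.randn(batch, d, 1)                       # fresh task vector per prompt
    x = torch.randn(batch, N + 1, d)
    y = x @ w                                          # (batch, N+1, 1)
    y[:, -1] = 0.0                                     # query label is hidden from the model
    return torch.cat([x, y], dim=-1), (x[:, -1:] @ w).squeeze(-1)

model = LinearSelfAttention(D, H, merged=True)         # flip to merged=False to compare dynamics
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for step in range(5000):
    Z, target = icl_regression_batch()
    pred = model(Z)[:, -1, -1:]                        # read prediction from the query token's y slot
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        print(step, loss.item())
```

Training this model with merged=True versus merged=False is the comparison the abstract refers to: the former produces a single abrupt drop in the loss, while the latter produces saddle-to-saddle plateaus.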
Lay Summary: Modern AI models like large language models exhibit a remarkable ability known as in-context learning (ICL) -- they can solve unseen tasks just by seeing a few examples in the input prompt. While we observe the ICL ability in trained models, we don't really understand how they acquire it during training.
We take a step toward answering this question by theoretically analyzing a simplified version of these models, called linear attention. We show that the way these models acquire ICL depends on how they are set up: in some cases, learning happens all at once -- the model makes no progress for a long time and then suddenly acquires the ability, like a eureka moment; in other cases, learning is progressive -- the model improves step by step, steadily building up its ICL ability.
Our findings help explain the different learning curves seen when training AI models, and how model parametrization affects the way they learn.
Link To Code: https://github.com/yedizhang/linattn-icl
Primary Area: Theory->Learning Theory
Keywords: learning dynamics, in-context learning, linear attention
Submission Number: 1939