Keywords: in-context learning, transformers, meta-learning
TL;DR: We show that in-context learning can be explained by the transformer block implicitly transforming the prompt context into a low-rank weight update of its MLP layer, uncovering an implicit learning dynamic at inference time.
Abstract: One of the most striking features of Large Language Models (LLMs) is their ability to learn in-context. Namely, at inference time an LLM is able to learn new patterns without any additional weight update when these patterns are presented in the form of examples in the prompt, even if these patterns were not seen during training. The mechanisms through which this can happen are still largely unknown. In this work, we show that stacking a self-attention layer with an MLP allows the transformer block to implicitly modify the weights of the MLP layer according to the context. We argue through theory and experimentation that this simple mechanism may be the reason why LLMs can learn in-context and not only during training. Specifically, we show how a transformer block implicitly transforms a context into a low-rank weight update of its MLP layer.
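As a rough illustration of the kind of identity the abstract gestures at, the sketch below numerically checks that adding a context-dependent vector to a token representation before a linear map is equivalent to applying a rank-1 (hence low-rank) update to that map's weights. This is an assumption-laden toy version, not the paper's exact construction; the names `W`, `x`, `delta`, and `dW` are illustrative placeholders.

```python
import numpy as np

# Toy check (an assumption, not the paper's exact derivation):
# if attention contributes a context vector `delta` to the token
# representation `x` before the MLP's first linear map `W`, then
#     W @ (x + delta) == (W + dW) @ x
# with dW = outer(W @ delta, x) / (x @ x), a rank-1 weight update.

rng = np.random.default_rng(0)
d_model, d_hidden = 8, 16

W = rng.normal(size=(d_hidden, d_model))   # first MLP weight matrix
x = rng.normal(size=d_model)               # token representation without context
delta = rng.normal(size=d_model)           # attention output encoding the context

# Implicit weight update induced by the context (rank 1 by construction).
dW = np.outer(W @ delta, x) / (x @ x)

lhs = W @ (x + delta)    # context injected into the MLP input by attention
rhs = (W + dW) @ x       # same effect expressed as a weight update, no context in the input

assert np.allclose(lhs, rhs)
print("max abs difference:", np.max(np.abs(lhs - rhs)))
print("rank of dW:", np.linalg.matrix_rank(dW))  # -> 1
```

The point of the toy identity is only that a context-dependent additive term on the input side can always be re-expressed as a low-rank modification of the downstream weights; the paper's claim concerns how a full transformer block realizes this implicitly.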
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 14627