Implicit Meta-Learning in Small Transformer Models: Insights from a Toy Task

Published: 21 Sept 2024, Last Modified: 06 Oct 2024 · BlackboxNLP 2024 · CC BY 4.0
Track: Extended abstract
Keywords: Transformers, Implicit Meta-Learning, IML
Abstract: In this work in progress, we investigate implicit meta-learning (IML) in transformers. IML is a phenomenon in which neural networks appear to internalise reliable-seeming information more strongly than unreliable-seeming information during training. In particular, we demonstrate that, on a toy task, IML occurs even in models with a single layer. We show that the emergence of IML is associated with an increase in gradient alignment between the reliable-seeming information and a downstream task that depends on that information. We also find a complex periodic structure in the model's embeddings, which evolves differently when the model is trained on reliable-seeming versus unreliable-seeming information. These findings contribute to our understanding of IML.
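The gradient-alignment quantity mentioned in the abstract is commonly measured as the cosine similarity between the flattened gradients that two tasks induce on shared parameters. The following is a minimal sketch of that measurement under stated assumptions: it uses a tiny linear model with analytic gradients rather than the paper's transformer, and all names (`grad_alignment`, `task_grad`) and the data setup are illustrative, not from the paper.

```python
import numpy as np

def grad_alignment(g1, g2):
    """Cosine similarity between two flattened gradient vectors."""
    g1, g2 = g1.ravel(), g2.ravel()
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12))

def task_grad(W, X, Y):
    """Gradient of the mean squared error 0.5*||W X - Y||^2 w.r.t. shared W."""
    E = W @ X - Y                 # residuals, shape (out_dim, n)
    return (E @ X.T) / X.shape[1]

# Shared parameters for both tasks (stand-in for the shared network weights).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))

# Task A: training on some information; Task B: a related task that reuses it
# (here simulated by the same targets plus small noise).
X = rng.normal(size=(8, 32))
Y = rng.normal(size=(4, 32))
gA = task_grad(W, X, Y)
gB = task_grad(W, X, Y + 0.1 * rng.normal(size=Y.shape))

# High alignment suggests a gradient step on task A also reduces task B's loss.
print(grad_alignment(gA, gB))
```

In the paper's setting, one would replace the analytic gradients with backpropagated gradients of the transformer's loss on the reliable-seeming data and on the downstream task, then track this cosine similarity over training.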
Submission Number: 26