TL;DR: We characterize the emergence of induction heads in transformer models using the synthetic task of learning Markov chains in-context.
Abstract: Large language models can generate text that mimics patterns in their inputs. We introduce a simple Markov chain (MC) sequence modeling task to study how this in-context learning (ICL) capability emerges.
Transformers trained on this task (ICL-MC) form statistical induction heads that compute accurate next-token probabilities from the bigram statistics of the context. Over the course of training, models pass through multiple phases: after an initial stage in which predictions are uniform, they learn to predict sub-optimally using in-context single-token (unigram) statistics; then there is a rapid phase transition to the correct in-context bigram solution.
We conduct an empirical and theoretical investigation of this multi-phase process, showing how successful learning results from the interaction between the transformer's layers, and uncovering evidence that the presence of simpler solutions delays the formation of the final optimal solution.
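To make the setup concrete, here is a minimal illustrative sketch (not the authors' code) of the ICL-MC task and of the two in-context strategies the abstract describes: a fresh Markov chain is drawn for each sequence, and the next token can be predicted either from in-context unigram counts (the sub-optimal intermediate phase) or from bigram counts conditioned on the last token (the optimal solution). The number of states, sequence length, Dirichlet prior, and smoothing are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_icl_mc_sequence(k=3, T=64):
    """Draw a random k-state Markov chain, then a length-T token sequence from it."""
    P = rng.dirichlet(np.ones(k), size=k)      # random k x k transition matrix (rows sum to 1)
    x = [int(rng.integers(k))]
    for _ in range(T - 1):
        x.append(int(rng.choice(k, p=P[x[-1]])))
    return np.array(x), P

def in_context_unigram(x, k):
    """Sub-optimal strategy: next-token probabilities from single-token counts in the context."""
    counts = np.bincount(x, minlength=k) + 1   # add-one smoothing (illustrative choice)
    return counts / counts.sum()

def in_context_bigram(x, k):
    """Optimal strategy: condition on the last token using in-context bigram counts."""
    counts = np.ones((k, k))                   # add-one smoothing (illustrative choice)
    for a, b in zip(x[:-1], x[1:]):
        counts[a, b] += 1
    return counts[x[-1]] / counts[x[-1]].sum()

if __name__ == "__main__":
    x, P = sample_icl_mc_sequence()
    print("unigram prediction:  ", in_context_unigram(x, 3))
    print("bigram prediction:   ", in_context_bigram(x, 3))
    print("true next-token law: ", P[x[-1]])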
Style Files: I have used the style files.
Submission Number: 26