Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Interpretability, Interactivity, and Analysis of Models for NLP
Submission Track 2: Language Modeling and Analysis of Language Models
Keywords: in-context learning, label words, anchors, large language models
TL;DR: We show that label words act as anchors in the information flow of in-context learning, and design several methods to improve in-context learning based on this finding.
Abstract: In-context learning (ICL) has emerged as a promising capability of large language models (LLMs): given demonstration examples in the prompt, they can perform diverse tasks. However, the underlying mechanism by which LLMs learn from the provided context remains under-explored. In this paper, we investigate the working mechanism of ICL through an information flow lens. Our findings reveal that label words in the demonstration examples function as anchors: (1) semantic information aggregates into label word representations during processing in the shallow layers; (2) the information consolidated in label words serves as a reference for the LLM's final predictions. Based on these insights, we introduce an anchor re-weighting method to improve ICL performance, a demonstration compression technique to expedite inference, and an analysis framework for diagnosing ICL errors in GPT2-XL. The promising applications of our findings further validate the uncovered ICL working mechanism and pave the way for future studies.
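The anchor hypothesis can be probed quantitatively by measuring how much of the target position's attention mass lands on the label-word positions at each layer. The sketch below is an illustrative simplification, not the paper's actual saliency-based metric: it assumes you already have per-layer attention tensors of shape `(heads, seq, seq)` (e.g. from a Transformer with attention outputs enabled) and the `label_positions` and `target_position` indices are placeholders for a real ICL prompt.

```python
import numpy as np

def anchor_attention_share(attn, label_positions, target_position):
    """For each layer, return the fraction of the target position's
    attention mass that falls on the label-word (anchor) positions.
    attn: iterable of per-layer arrays with shape (heads, seq, seq),
    where attn[l][h, i, j] is attention from position i to position j."""
    shares = []
    for layer_attn in attn:
        # Attention rows emitted by the target position, all heads.
        row = layer_attn[:, target_position, :]
        # Mass on anchors relative to total mass (rows sum to 1 per head).
        share = row[:, label_positions].sum() / row.sum()
        shares.append(float(share))
    return shares

# Toy example: 4 layers, 2 heads, sequence length 10, random attention.
rng = np.random.default_rng(0)
attn = rng.random((4, 2, 10, 10))
attn /= attn.sum(axis=-1, keepdims=True)  # normalize rows like softmax

# Hypothetical prompt layout: labels at positions 3 and 7, query at 9.
shares = anchor_attention_share(attn, label_positions=[3, 7], target_position=9)
```

Under the paper's hypothesis, this share computed on real model attentions should be disproportionately high in deeper layers, where the prediction draws on the anchors; with the random attention above it stays near the chance level of 2/10.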
Submission Number: 1611