Exposing Attention Glitches with Flip-Flop Language Modeling

Published: 23 Jun 2023, Last Modified: 23 Jun 2023
Keywords: Transformers, language models, hallucinations
TL;DR: Transformers fail to robustly keep track of a single bit of memory. The glitches are surprisingly subtle and persistent. We hypothesize that this accounts for some "closed-domain hallucinations".
Abstract: Why do large language models hallucinate? This work identifies and analyzes the phenomenon of \emph{attention glitches}, in which the Transformer architecture's inductive biases intermittently fail to capture robust reasoning. To isolate the issue, we introduce \emph{flip-flop language modeling} (FFLM), a parametric family of synthetic benchmarks designed to probe the extrapolation of language models. This simple generative task requires a model to copy binary symbols over long-range dependencies, ignoring the tokens in between. We find that Transformer FFLMs suffer from a long tail of sporadic reasoning errors, some of which we can eliminate using various regularization techniques. Our preliminary mechanistic analyses show why the remaining errors may be very difficult to diagnose and resolve. We hypothesize that attention glitches account for (some of) the closed-domain hallucinations in natural LLMs.
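Example: The abstract describes FFLM as a generative task in which a model must copy binary symbols across long-range dependencies while ignoring intervening tokens. Below is a minimal sketch of how such a benchmark could be instantiated, assuming a vocabulary of write/read/ignore instructions, each followed by a binary symbol; the token names, sequence length, and sampling probabilities (length_pairs, p_read, p_write) are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical flip-flop language (FFL) data generator.
# Assumed vocabulary: instructions "w" (write), "r" (read), "i" (ignore),
# each followed by a bit "0" or "1". Parameters are illustrative only.
import random

def sample_ffl_sequence(length_pairs=32, p_read=0.1, p_write=0.1, seed=None):
    """Sample one flip-flop sequence as a list of tokens.

    Each (instruction, bit) pair is one of:
      ("w", b): write bit b into the single-bit memory,
      ("r", b): read -- b must equal the most recently written bit,
      ("i", b): ignore -- b is an arbitrary distractor bit.
    The sequence starts with a write so every read has a defined target.
    """
    rng = random.Random(seed)
    memory = rng.choice("01")
    tokens = ["w", memory]                        # initial write
    for _ in range(length_pairs - 1):
        u = rng.random()
        if u < p_write:                           # overwrite the memory bit
            memory = rng.choice("01")
            tokens += ["w", memory]
        elif u < p_write + p_read:                # read must copy the memory bit
            tokens += ["r", memory]
        else:                                     # ignore: arbitrary bit in between
            tokens += ["i", rng.choice("01")]
    return tokens

def read_accuracy(tokens, predictions):
    """Score a model only on bits that follow a read instruction;
    positions after "w" or "i" are unconstrained and skipped."""
    correct = total = 0
    for t in range(1, len(tokens)):
        if tokens[t - 1] == "r":
            total += 1
            correct += int(predictions[t] == tokens[t])
    return correct / max(total, 1)

if __name__ == "__main__":
    print(" ".join(sample_ffl_sequence(length_pairs=16, seed=0)))
```

Scoring only the bits that follow a read instruction mirrors the task's defining requirement: every other position is unconstrained, so errors at read positions isolate failures to carry a single bit of memory across the intervening tokens.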
Submission Number: 46