Blink of an eye: a simple theory for feature localization in generative models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Oral · CC BY 4.0
TL;DR: A simple, general, and unifying theory for feature localization in language and diffusion models
Abstract: Large language models can exhibit unexpected behavior in the blink of an eye. In a recent computer-use demo, a language model switched from coding to Googling pictures of Yellowstone; similarly sudden shifts have been observed in reasoning patterns and jailbreaks. This phenomenon is not unique to autoregressive models: in diffusion models, key features of the final output are decided in narrow "critical windows" of the generation process. In this work we develop a simple, unifying theory to explain this phenomenon. Using the formalism of stochastic localization for generative models, we show that it emerges generically whenever the generation process localizes to a sub-population of the distribution it models. While critical windows have been studied at length in diffusion models, existing theory relies heavily on strong distributional assumptions and on the particulars of Gaussian diffusion. In contrast to existing work, our theory (1) applies to both autoregressive and diffusion models; (2) makes very few distributional assumptions; (3) quantitatively improves on previous bounds even when specialized to diffusions; and (4) requires only basic mathematical tools. Finally, we validate our predictions empirically for LLMs and find that critical windows often coincide with failures in problem solving on various math and reasoning benchmarks.
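To make the notion of a critical window concrete, here is a minimal sketch (not the authors' implementation; see the linked repository for that) of how such a window can be located empirically: freeze a prefix of the generation trajectory, resample the remainder many times, and track how often the final sample lands in a target sub-population. The toy `resample_feature` dynamics below are an illustrative assumption standing in for a real language or diffusion model; only the measurement loop reflects the idea described in the abstract.

```python
# Toy illustration of locating a critical window by resampling from prefixes.
import numpy as np

rng = np.random.default_rng(0)
TOTAL_STEPS = 60      # length of the generation trajectory (tokens or diffusion steps)
DECISIVE_STEP = 35    # toy ground truth: the feature gets locked in around here

def resample_feature(prefix_len: int) -> bool:
    """Resample the trajectory from step `prefix_len` onward and report whether
    the final sample exhibits the target feature (toy stand-in dynamics)."""
    if prefix_len >= DECISIVE_STEP:
        return True                 # feature already determined by the prefix
    return rng.random() < 0.5       # before the window, the feature is still undecided

def feature_probability(prefix_len: int, n_samples: int = 200) -> float:
    """Monte Carlo estimate of P(feature | prefix of length prefix_len)."""
    return float(np.mean([resample_feature(prefix_len) for _ in range(n_samples)]))

probs = [feature_probability(t) for t in range(TOTAL_STEPS + 1)]

# The critical window is the narrow span over which this probability jumps.
jump_step = next(t for t, p in enumerate(probs) if p > 0.9)
print(f"posterior over the sub-population jumps near step {jump_step} of {TOTAL_STEPS}")
```

In this toy setup the estimated posterior sits near 0.5 for short prefixes and snaps to 1 once the prefix crosses the decisive step, which is the qualitative signature the theory attributes to localization onto a sub-population.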
Lay Summary: Language models can change their behavior rapidly; for example, a language model might suddenly switch from coding to browsing information about national parks. These rapid changes, called "critical windows," have also been observed when language models solve math problems and in jailbreaks that elicit dangerous information from them. Surprisingly, critical windows appear in image and video generation models as well. In this work, we apply the mathematical formalism of stochastic localization to develop a simple, general, and unifying theory that explains this phenomenon across generative models. We show that it occurs whenever the generative model specializes to a sub-population of the distribution it models. Our research has implications for the safety and reasoning capabilities of language models and could inspire new methods to make them more robust to these types of failures.
Link To Code: https://github.com/marvinli-harvard/critical-windows-lm
Primary Area: Theory->Probabilistic Methods
Keywords: stochastic localization, theory of diffusion, large language models, interpretability, reasoning
Submission Number: 1904