Massive Activations in Large Language Models

Published: 04 Mar 2024, Last Modified: 02 Apr 2024
Venue: ME-FoMo 2024 Oral
License: CC BY 4.0
Keywords: Large Language Models; Self-Attention
TL;DR: Massive activations function as important biases in LLMs and they are closely connected to the self-attention mechanism.
Abstract: We observe an empirical phenomenon in Large Language Models (LLMs)—very few activations exhibit significantly larger values than others (e.g., 100,000 times larger). We call them massive activations. First, we demonstrate the widespread existence of massive activations across various LLMs and characterize their locations. Second, we find their values largely stay constant regardless of the input, and they function as indispensable bias terms in LLMs. Third, these massive activations lead to the concentration of attention probabilities onto their corresponding tokens and, further, to implicit bias terms in the self-attention output. Finally, we also study massive activations in Vision Transformers.
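To make the phenomenon concrete, here is a minimal sketch (not the authors' code) of how one might probe for massive activations in a Hugging Face causal LM: run a forward pass, then compare each layer's largest hidden-state magnitude against its median magnitude. The model name, prompt, and ratio cutoff are illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative; the paper studies a range of LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

inputs = tokenizer("Massive activations are rare but large.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states: tuple of (num_layers + 1) tensors, each (batch, seq_len, hidden_dim)
for layer, h in enumerate(outputs.hidden_states):
    mags = h[0].abs()  # (seq_len, hidden_dim); batch size is 1 here
    top, med = mags.max().item(), mags.median().item()
    if med > 0 and top / med > 1_000:  # illustrative cutoff for "massive"
        seq_idx, dim_idx = divmod(mags.argmax().item(), mags.shape[-1])
        print(f"layer {layer}: |act|={top:.1f} at token {seq_idx}, dim {dim_idx} "
              f"({top / med:.0f}x the median magnitude)")
```

Because the abstract notes that these activations appear at fixed locations and stay roughly constant across inputs, running this probe on several unrelated prompts and checking whether the flagged (token, dimension) pairs repeat is a natural follow-up check.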
Submission Number: 59