Layer by Layer: Uncovering Hidden Representations in Language Models

Published: 01 May 2025, Last Modified: 18 Jun 2025. Venue: ICML 2025 (oral). License: CC BY-SA 4.0
TL;DR: An investigation into the quality and characteristics of intermediate LLM layers
Abstract: From extracting features to generating text, the outputs of large language models (LLMs) typically rely on their final layers, following the conventional wisdom that earlier layers capture only low-level cues. However, our analysis shows that intermediate layers can encode even richer representations, often improving performance on a wide range of downstream tasks. To explain and quantify these hidden-layer properties, we propose a unified framework of representation quality metrics based on information theory, geometry, and invariance to input perturbations. Our framework highlights how each model layer balances information compression and signal preservation, revealing why mid-depth embeddings can exceed the last layer's performance. Through extensive experiments on 32 text-embedding tasks across various architectures (transformers, state-space models) and domains (language, vision), we demonstrate that intermediate layers consistently provide stronger features, challenging the standard reliance on final-layer embeddings and opening new directions for using mid-layer representations to build more robust and accurate models.
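
To make the idea concrete, the sketch below shows one plausible way to pull per-layer embeddings out of a Hugging Face transformer and score each layer with a simple matrix-based entropy, one of the information-theoretic quantities a framework like this can draw on. The model name, the mean pooling, and the exact entropy estimator are illustrative assumptions, not necessarily the paper's configuration; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # illustrative choice; any HF encoder or decoder LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

# A toy corpus; in practice you would score layers over many more sentences.
texts = [
    "The cat sat on the mat.",
    "Stock prices fell sharply today.",
    "Photosynthesis converts light into chemical energy.",
    "The referee blew the final whistle.",
]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch)

# out.hidden_states is a tuple: (input embeddings, layer 1, ..., layer L).
# Mean-pool over non-padding tokens to get one vector per sentence per layer.
mask = batch["attention_mask"].unsqueeze(-1).float()
layer_embs = [(h * mask).sum(dim=1) / mask.sum(dim=1) for h in out.hidden_states]

def matrix_entropy(z: torch.Tensor) -> float:
    """Von Neumann entropy of the trace-normalized Gram matrix of z:
    a rough proxy for how spread out (less compressed) a representation is.
    This is one simple estimator, not necessarily the paper's metric."""
    z = F.normalize(z, dim=1)            # unit-norm rows
    gram = (z @ z.T) / z.shape[0]        # symmetric PSD matrix with trace 1
    eigs = torch.linalg.eigvalsh(gram).clamp(min=1e-12)
    return float(-(eigs * eigs.log()).sum())

for i, z in enumerate(layer_embs):
    print(f"layer {i:2d}  entropy = {matrix_entropy(z):.3f}")
```

A single entropy score is only one axis of layer quality; per the abstract, the full framework also includes geometric measures and invariance to input perturbations, and downstream-task accuracy is what ultimately reveals which layer's features are strongest.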
Lay Summary: Large language models (LLMs) are made up of many layers, applied one after another. Traditionally, the final layers are believed to be the most important because they produce the output, while earlier layers are thought to handle only simple, low-level features. However, this study finds that the middle layers often contain richer and more useful information than the final ones. We developed a new framework to measure the quality of information in each layer, using tools from information theory and geometry. After testing many models and tasks, we found that intermediate layers consistently provide better features for understanding text. This challenges the common assumption that only the final layers matter and suggests that tapping into middle layers could lead to more accurate and reliable AI systems.
Link To Code: https://github.com/OFSkean/information_flow
Primary Area: Deep Learning->Large Language Models
Keywords: large language model, entropy, augmentation, intermediate layer, vision transformer
Submission Number: 12891