Extracting Paragraphs from LLM Token Activations

Published: 09 Oct 2024, Last Modified: 15 Dec 2024 · MINT@NeurIPS 2024 · CC BY 4.0
Keywords: transformers, interpretability, LLMs
TL;DR: We study how language models encode upcoming paragraphs, and find that the activations of newline tokens capture this information to some extent.
Abstract: Generative large language models (LLMs) excel in natural language processing tasks, yet their inner workings remain underexplored beyond token-level predictions. This study investigates the degree to which these models decide the content of a paragraph at its onset, shedding light on their contextual understanding. By examining the information encoded in single-token activations, specifically the "\n\n" double newline token, we demonstrate that patching these activations can transfer significant information about the context of the following paragraph, providing further insights into the model’s capacity to plan ahead.
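The abstract's core operation, patching a single token's activation from one context into another, can be sketched as below. This is a minimal toy illustration of the mechanic, not the paper's actual setup: the model, dimensions, and positions are illustrative assumptions, and a real experiment would patch the "\n\n" token's residual-stream activation inside an LLM.

```python
# Hedged sketch of single-token activation patching on a toy torch model.
# All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyModel(nn.Module):
    """Stand-in for a transformer: two position-wise layers."""
    def __init__(self, d=8):
        super().__init__()
        self.layer1 = nn.Linear(d, d)
        self.layer2 = nn.Linear(d, d)

    def forward(self, x):
        h = torch.tanh(self.layer1(x))  # stand-in for a hidden layer
        return self.layer2(h)

model = ToyModel()

# Two "prompts" as sequences of token embeddings (seq_len=5, d=8).
src = torch.randn(5, 8)  # source context (e.g. one ending in "\n\n")
tgt = torch.randn(5, 8)  # target context to patch into
pos = 4                  # position of the token whose activation we transfer

# 1) Cache the source run's activation at `pos` after layer1.
cached = {}
def cache_hook(module, inp, out):
    cached["h"] = out[pos].detach().clone()

handle = model.layer1.register_forward_hook(cache_hook)
model(src)
handle.remove()

# 2) Patch the cached activation into the target run at the same position.
def patch_hook(module, inp, out):
    out = out.clone()
    out[pos] = cached["h"]
    return out  # returning a tensor replaces the module's output

handle = model.layer1.register_forward_hook(patch_hook)
patched_out = model(tgt)
handle.remove()

clean_out = model(tgt)
# The patch changes the output at `pos` while (in this position-wise toy
# model) leaving other positions untouched.
```

Because both toy layers act position-wise, only the patched position's output changes; in a real transformer, attention would let the patched "\n\n" activation influence all subsequent tokens, which is what makes the transfer of paragraph-level information observable.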
Email Of Author Nominated As Reviewer: work@nicky.pro
Submission Number: 1