Low-Perplexity LLM-Generated Sequences and Where To Find Them

Published: 22 Jun 2025, Last Modified: 22 Jun 2025
Venue: ACL-SRW 2025 Poster
License: CC BY 4.0
Keywords: natural language processing, nlp, training data attribution, tda, membership inference, llm safety
TL;DR: We identify when low-perplexity regions of LLM output match the training data and propose explanations for when they do not.
Abstract: As Large Language Models (LLMs) become increasingly widespread, understanding how specific training data shapes their outputs is crucial for transparency, accountability, privacy, and fairness. To explore how LLMs recall and replicate learned information, we introduce a systematic approach centered on analyzing low-perplexity sequences—high-probability text spans generated by the model. Our pipeline reliably extracts such long sequences across diverse topics while avoiding degeneration, then traces them back to their sources in the training data. Surprisingly, we find that a substantial portion of these low-perplexity spans cannot be mapped to the corpus. For those that do match, we analyze the types of memorization involved and present the distribution of unique documents contributing to these mappings, highlighting the extent of verbatim recall.
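The abstract describes a two-stage pipeline: extract long low-perplexity (high-probability) spans from model generations, then trace them back to the training corpus. The sketch below illustrates that idea in minimal form; the thresholds (`max_ppl`, `min_len`), the whitespace-join detokenization, and the naive substring matching are assumptions for illustration, not the paper's actual implementation.

```python
import math

def low_perplexity_spans(tokens, logprobs, max_ppl=1.5, min_len=4):
    """Return maximal runs of tokens whose token-level perplexity
    exp(-logprob) stays at or below max_ppl, keeping only runs of
    at least min_len tokens. Thresholds are illustrative assumptions."""
    spans, start = [], None
    for i, lp in enumerate(logprobs):
        ok = math.exp(-lp) <= max_ppl  # low perplexity = high probability
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start >= min_len:
                spans.append((start, i))
            start = None
    if start is not None and len(tokens) - start >= min_len:
        spans.append((start, len(tokens)))
    return spans

def match_to_corpus(tokens, spans, corpus_docs):
    """Trace each extracted span back to the documents that contain it
    verbatim (naive substring search; a real pipeline would use an index)."""
    hits = {}
    for s, e in spans:
        text = " ".join(tokens[s:e])  # toy detokenization assumption
        hits[(s, e)] = [i for i, doc in enumerate(corpus_docs) if text in doc]
    return hits
```

For example, six consecutive tokens each generated with log-probability -0.1 form one low-perplexity span, which can then be checked against a document collection; spans with an empty hit list correspond to the unmatched portion the abstract highlights.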
Archival Status: Non‑archival
Paper Length: Short Paper (up to 4 pages of content)
Submission Number: 176