How much can language models memorize?

Authors: Anonymous (Preprint Submission 659)

Published: 18 Feb 2025 · Anonymous Preprint Submission · License: CC BY 4.0
Keywords: memorization, transformers, LLMs, language models, pretraining, information, capacity
Abstract: Due to the inherent structure of language, prior studies of language model memorization have struggled to disentangle memorization from generalization. We formally separate memorization into two components: unintended memorization, the information a model contains about a specific dataset, and generalization, the information a model contains about the true data-generation process. Our framework allows us to cleanly separate memorization and generalization in a variety of settings. When we completely eliminate generalization, we can compute the exact capacity of language models; our measurements estimate that GPT-style models have a capacity of approximately 3.6 bits per parameter. We train language models on datasets of increasing size and observe that models accumulate unintended memorization until their capacity fills, at which point memorization decreases as models begin to generalize. We train hundreds of transformer language models ranging from 500K to 1.5B parameters and produce a series of scaling laws relating model capacity and data size to membership inference.
Submission Number: 659
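
To make the capacity measurement in the abstract concrete, here is a minimal sketch of one way to count memorized bits when the training data consists of uniformly random tokens, so that generalization is impossible and any compression of the data below the uniform baseline must be unintended memorization. This sketch is not taken from the paper: the function name memorized_bits, the use of NumPy, and every numeric value below are my own illustrative assumptions, not measured results.

```python
import numpy as np

def memorized_bits(per_token_nll_nats: np.ndarray, vocab_size: int) -> float:
    """Estimate bits memorized about a dataset of uniformly random tokens.

    per_token_nll_nats: the trained model's negative log-likelihood (in nats)
        for each training token.
    vocab_size: size of the token vocabulary.

    For uniform random data the best achievable code length without
    memorization is log2(vocab_size) bits per token; any savings below
    that baseline must come from storing the specific sampled sequence.
    """
    baseline_bits = np.log2(vocab_size)            # bits/token under a uniform code
    model_bits = per_token_nll_nats / np.log(2)    # convert nats to bits
    savings = np.clip(baseline_bits - model_bits, 0.0, None)
    return float(savings.sum())

# Toy illustration with synthetic numbers (hypothetical, not the paper's data):
# a model that codes 1M random tokens at ~7.5 bits/token instead of the
# 11-bit uniform baseline has memorized roughly 3.5M bits.
rng = np.random.default_rng(0)
nll_nats = rng.normal(loc=7.5 * np.log(2), scale=0.3, size=1_000_000)
bits = memorized_bits(nll_nats, vocab_size=2048)
print(f"memorized ≈ {bits:.3e} bits")
print(f"≈ {bits / 1_000_000:.2f} bits per parameter for a 1M-parameter model")
```

Dividing the memorized-bit count by the parameter count gives a bits-per-parameter figure of the kind the abstract reports; under these assumptions the toy numbers land near 3.5 bits per parameter purely by construction.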