Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon

ACL ARR 2024 June Submission1748 Authors

14 Jun 2024 (modified: 08 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Memorization in language models is typically treated as a homogeneous phenomenon, neglecting the specifics of the memorized data. We instead model memorization as the effect of a set of complex factors that describe each sample and relate it to the model and corpus. To build intuition around these factors, we break memorization down into a taxonomy: recitation of highly duplicated sequences, reconstruction of inherently predictable sequences, and recollection of sequences that are neither. We demonstrate the usefulness of this taxonomy by using it to construct a predictive model for memorization. By analyzing dependencies and inspecting the weights of the predictive model, we find that different factors influence the likelihood of memorization differently depending on the taxonomic category.
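The three-way taxonomy described in the abstract can be sketched as a simple decision rule. This is a minimal illustration, not the paper's method: the feature names (`duplicate_count`, `is_predictable`) and the duplication threshold are hypothetical stand-ins for whatever per-sample statistics the authors actually compute.

```python
def taxonomize(duplicate_count: int, is_predictable: bool,
               duplicate_threshold: int = 10) -> str:
    """Assign a memorized sample to one taxonomic category.

    Hypothetical inputs: `duplicate_count` is how often the sequence
    appears in the training corpus; `is_predictable` flags inherently
    predictable sequences (e.g., templated or incrementing text).
    The threshold value is illustrative, not taken from the paper.
    """
    if duplicate_count >= duplicate_threshold:
        return "recitation"      # highly duplicated sequences
    if is_predictable:
        return "reconstruction"  # inherently predictable sequences
    return "recollection"        # sequences that are neither
```

The ordering matters: recitation takes precedence, so a sequence that is both duplicated and predictable is still categorized by its duplication.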
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: memorization, ontologies, language modelling
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 1748