Learning, Forgetting, Remembering: Insights From Tracking LLM Memorization During Training

Published: 21 Sept 2024, Last Modified: 11 Oct 2024, BlackboxNLP 2024, CC BY 4.0
Track: Full paper
Keywords: large language models, memorization, data extraction
TL;DR: LLMs memorize more of their training data at the beginning and the very end of training.
Abstract: Large language models memorize portions of their training data verbatim. Tracking memorization across training, we find that models exhibit higher memorization rates both early on and at the very end of training, with the lowest rates occurring midway through the process. This pattern arises because models retain most of the examples memorized early on, while forgetting many more examples as training progresses. Interestingly, these forgotten examples are sometimes re-memorized later, often undergoing repeated cycles of forgetting and re-memorization. Notably, examples memorized early in training are more likely to remain consistently retained, suggesting that they become more firmly 'crystallized' in the model's representation. Based on these insights, we tentatively recommend placing data that is more likely to be sensitive in the middle stages of the training process.
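To make the notion of "verbatim memorization tracked across training" concrete, below is a minimal sketch of a common extractability-style check: prompt a checkpoint with the prefix of a training example and test whether greedy decoding reproduces the true suffix token-for-token, repeating this for each saved checkpoint. The checkpoint names, prefix/suffix lengths, and exact-match criterion are illustrative assumptions, not necessarily the paper's exact protocol.

```python
# Illustrative sketch: test whether a checkpoint reproduces a training
# example verbatim when prompted with its prefix (greedy decoding).
# Checkpoint names and length parameters are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def is_memorized(model, tokenizer, text, prefix_len=32, suffix_len=32):
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    if len(ids) < prefix_len + suffix_len:
        return False  # example too short for this prefix/suffix split
    prefix = ids[:prefix_len].unsqueeze(0)
    target_suffix = ids[prefix_len:prefix_len + suffix_len]
    with torch.no_grad():
        out = model.generate(
            prefix,
            max_new_tokens=suffix_len,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    generated_suffix = out[0, prefix_len:prefix_len + suffix_len]
    # "Memorized" here means the greedy continuation matches the true
    # suffix exactly (verbatim extraction).
    return torch.equal(generated_suffix.cpu(), target_suffix.cpu())

# Tracking over training: run the same check on successive checkpoints
# to see when an example is learned, forgotten, or re-memorized.
checkpoints = ["org/model-step1000", "org/model-step50000", "org/model-final"]
example = "An example sequence drawn from the training corpus ..."
for ckpt in checkpoints:
    tok = AutoTokenizer.from_pretrained(ckpt)
    mdl = AutoModelForCausalLM.from_pretrained(ckpt).eval()
    print(ckpt, is_memorized(mdl, tok, example))
```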
Submission Number: 15