Exploring Memorization in Fine-tuned Language Models

21 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Large language models, Memorization, Privacy
Abstract: Large language models have shown great capabilities in various tasks but also exhibit memorization of training data, raising tremendous privacy and copyright concerns. While prior work has studied memorization during pre-training, the exploration of memorization during fine-tuning is rather limited. Compared with pre-training, fine-tuning typically involves sensitive data and diverse objectives, and thus may bring unique memorization behaviors and distinct privacy risks. In this work, we conduct the first comprehensive analysis of LLM memorization during fine-tuning. Our studies with open-sourced and our own fine-tuned language models across various tasks indicate that memorization in fine-tuned models shows a strong disparity among tasks. We provide an understanding of this task disparity via sparse coding theory and unveil a strong correlation between memorization and the model's attention distribution. Finally, by investigating the memorization behavior of multi-task fine-tuning, we identify it as a potential strategy to mitigate fine-tuning memorization.
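For context, a minimal sketch of one common way to quantify such memorization (not necessarily the submission's exact protocol): prompt the fine-tuned model with a prefix taken from a training example and check whether it regenerates the true suffix verbatim under greedy decoding. The checkpoint name, prefix/suffix lengths, and decoding settings below are illustrative assumptions.

```python
# Sketch of a prefix-prompting memorization check, assuming a Hugging Face
# causal LM. All concrete values (model name, lengths) are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def memorized_fraction(model, tokenizer, train_texts, prefix_len=50, suffix_len=50):
    """Fraction of training examples whose suffix the model reproduces exactly."""
    hits = 0
    for text in train_texts:
        ids = tokenizer(text, return_tensors="pt").input_ids[0]
        if ids.size(0) < prefix_len + suffix_len:
            continue  # skip examples too short to split into prefix + suffix
        prefix = ids[:prefix_len].unsqueeze(0)
        target = ids[prefix_len:prefix_len + suffix_len]
        with torch.no_grad():
            out = model.generate(prefix, max_new_tokens=suffix_len, do_sample=False)
        generated = out[0, prefix_len:prefix_len + suffix_len]
        # torch.equal is False on shape mismatch, so early EOS counts as a miss
        if torch.equal(generated, target):
            hits += 1
    return hits / len(train_texts)

tokenizer = AutoTokenizer.from_pretrained("gpt2")           # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()
# rate = memorized_fraction(model, tokenizer, fine_tuning_corpus)
```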
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3829