Outliers Memorized Last: Trends in Memorization of Diffusion Models Based on Training Distribution and Epoch

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Diffusion Models, Generative AI, Memorization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: Diffusion models generally memorize 'inliers' of a training dataset at earlier training epochs, while memorizing 'outliers' only at later epochs.
Abstract: Memorization and replication of training data by diffusion models such as Stable Diffusion is a poorly understood phenomenon that raises a number of privacy and legal issues. This paper analyzes how the location of a data point within the training dataset's distribution affects its likelihood of memorization over training epochs. Importantly, it finds that 'outliers' are less likely to be memorized early in training, with their memorization rate eventually matching that of the rest of the dataset. It then suggests applications that exploit this difference in memorization rates, including hyperparameter tuning and anomaly detection, and outlines further research that could build on this conclusion to deepen our understanding of memorization.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8934