Abstract: In the rapidly evolving landscape of artificial intelligence, generative models such as Generative Adversarial Networks (GANs) and Diffusion Models have become cornerstone technologies, driving innovation in diverse fields from art creation to healthcare. Despite their potential, these models face the significant challenge of data memorization, which poses risks to privacy and the integrity of generated content. Among various metrics for memorization detection, our study delves into memorization scores calculated from encoder layer embeddings, which involve measuring distances between samples in the embedding spaces. In particular, we find that memorization scores calculated from the layer embeddings of Vision Transformers (ViTs) show a notable trend: the deeper the layer, the lower the measured memorization. We find that memorization scores from the early layers' embeddings are more sensitive to low-level memorization (e.g., colors and simple patterns in an image), while those from the later layers are more sensitive to high-level memorization (e.g., the semantic meaning of an image). We also observe that, for a specific model architecture, its degree of memorization at different levels of information is unique; it can be viewed as an inherent property of the architecture. Building upon this insight, we introduce a unique fingerprinting methodology. This method capitalizes on the distinctive distributions of memorization scores across different layers of ViTs, providing a novel approach to identifying models involved in generating deepfakes and malicious content. Our approach demonstrates a marked 30% improvement in identification accuracy over existing baseline methods, offering a more effective tool for combating digital misinformation.
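The abstract's core measurement, a per-layer memorization score derived from distances between generated and training samples in each encoder layer's embedding space, can be sketched as follows. This is a minimal illustration, not the paper's exact CT-score formulation: the function name, the choice of cosine-normalized nearest-neighbor distance, and the list-of-layers input format are all assumptions for exposition.

```python
import numpy as np

def layerwise_memorization_scores(gen_embeds, train_embeds):
    """Illustrative per-layer memorization score (not the paper's exact metric).

    gen_embeds, train_embeds: lists with one array per encoder layer,
    each of shape (n_samples, dim) -- embeddings of generated and
    training images at that layer.  A lower mean nearest-neighbour
    distance suggests stronger memorization at that layer.
    """
    scores = []
    for g, t in zip(gen_embeds, train_embeds):
        # Normalize rows so Euclidean distance tracks cosine similarity.
        g = g / np.linalg.norm(g, axis=1, keepdims=True)
        t = t / np.linalg.norm(t, axis=1, keepdims=True)
        # Pairwise distances from each generated sample to every training sample.
        d = np.linalg.norm(g[:, None, :] - t[None, :, :], axis=-1)
        # Mean distance to the nearest training sample.
        scores.append(d.min(axis=1).mean())
    return scores
```

Under the trend reported above, such scores would tend to rise (indicating less measured memorization) for deeper ViT layers, and the resulting per-layer profile is what the proposed fingerprinting method exploits.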
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
### Main text
- Expanded analysis in Section 4.2, adding explanations for CT-scores on curated datasets.
- Revised Figures 1 and 2: Adjusted y-axis ranges for consistency across models.
- Clarified the drop in CT-score in the last layer for Figures 1 and 2.
- Clarified the primary objective of utilizing CT-score.
- Provided a more detailed experimental motivation and result analysis for Section 4.3.
- Revised Related Work section to include deeper discussion with prior research.
- Clarified CIFAR-10 dataset usage and computational constraints in experimental setup.
- Added a comparison between ResNet-50 and our method, explaining generalization and dataset dependence (Section 5.3).
### Appendix
- Added detailed description of encoder pretraining process and datasets (Appendix A).
Assigned Action Editor: ~Weijian_Deng1
Submission Number: 3093