Abstract: Large Language Models (LLMs) have shown impressive capabilities, while also raising concerns about data contamination caused by privacy leakage and the inclusion of benchmark datasets in the pre-training corpus. It is therefore vital to detect contamination by checking whether an LLM has been pre-trained on the target texts. Recent studies rely on surface features such as generated texts and their perplexities, which are superficial and not reliable. In this study, we propose to use the probing technique for pre-training data detection by examining the model's internal activations. Our method is simple and effective, and leads to more trustworthy pre-training data detection. Additionally, we propose ArxivMIA, a new, challenging benchmark comprising arXiv abstracts from the Computer Science and Mathematics categories. Our experiments demonstrate that our method outperforms all baselines and achieves state-of-the-art performance on both WikiMIA and ArxivMIA, with additional experiments confirming its efficacy.
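To make the probing idea concrete, below is a minimal sketch of a probing classifier for pre-training data detection. The choices here (the open-weights model "gpt2", layer index, mean-pooling over tokens, and a logistic-regression probe) are illustrative assumptions, not the authors' exact configuration; in practice the member/non-member texts would come from a benchmark such as WikiMIA or ArxivMIA.

```python
# Sketch: probe an LLM's internal activations to classify member vs. non-member texts.
# All model/layer/pooling choices below are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"   # stand-in for the target LLM (assumption)
LAYER = 6             # hidden layer to probe (assumption)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

def activation(text: str) -> torch.Tensor:
    """Mean-pooled hidden state of one text at the chosen layer."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    hidden = outputs.hidden_states[LAYER]   # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)    # shape: (dim,)

# Toy labeled texts; real experiments would use a membership-inference benchmark.
member_texts = ["A passage seen during pre-training ...", "Another memorized passage ..."]
nonmember_texts = ["An unseen text ...", "An abstract written after the training cutoff ..."]

X = torch.stack([activation(t) for t in member_texts + nonmember_texts]).numpy()
y = [1] * len(member_texts) + [0] * len(nonmember_texts)

# The probe: a simple linear classifier over internal activations.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print(probe.predict_proba(X)[:, 1])  # membership scores for each text
```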
Paper Type: long
Research Area: Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Preprint Status: There is no non-anonymous preprint and we do not intend to release one.
A1: yes
A1 Elaboration For Yes Or No: Section Limitations
A2: n/a
A3: yes
A3 Elaboration For Yes Or No: Abstract; Section 1
B: yes
B1: yes
B1 Elaboration For Yes Or No: Section 1/2/4/5
B2: yes
B2 Elaboration For Yes Or No: Section 1/2/4/5
B3: yes
B3 Elaboration For Yes Or No: Section 1/2/4/5
B4: yes
B4 Elaboration For Yes Or No: Section 4
B5: yes
B5 Elaboration For Yes Or No: Section 1/2/4
B6: yes
B6 Elaboration For Yes Or No: Section 4/5
C: yes
C1: yes
C1 Elaboration For Yes Or No: Section 5
C2: yes
C2 Elaboration For Yes Or No: Section 5
C3: yes
C3 Elaboration For Yes Or No: Section 6
C4: yes
C4 Elaboration For Yes Or No: Section 5
D: no
E: no