Do LLMs Really Memorize Personally Identifiable Information? Revisiting PII Leakage with a Cue-Controlled Memorization Framework

ACL ARR 2026 January Submission 7030 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · CC BY 4.0
Keywords: Memorization, Multilingual LLMs, Privacy Leakage
Abstract: Large Language Models (LLMs) have been reported to “leak” Personally Identifiable Information (PII), with successful PII reconstruction often interpreted as evidence of memorization. We propose a principled revision of memorization evaluation for LLMs, arguing that PII leakage should be evaluated under low lexical cue conditions, where target PII cannot be reconstructed through prompt-induced generalization or pattern completion. We formalize Cue-Resistant Memorization (CRM) as a cue-controlled evaluation framework and a necessary condition for valid memorization evaluation, explicitly conditioning on prompt-target overlap cues. Using CRM, we conduct a large-scale multilingual re-evaluation of PII leakage across 32 languages and multiple memorization paradigms. Revisiting reconstruction-based settings, including verbatim prefix-suffix completion and associative reconstruction, we find that their apparent effectiveness is driven primarily by direct surface-form cues rather than by true memorization. When such cues are controlled for, reconstruction success diminishes substantially. We further examine cue-free generation and membership inference, both of which exhibit extremely low true positive rates. Overall, our results suggest that previously reported PII leakage is better explained by cue-driven behavior than by genuine memorization, highlighting the importance of cue-controlled evaluation for reliably quantifying privacy-relevant memorization in LLMs.
Paper Type: Long
Research Area: Multilinguality and Language Diversity
Research Area Keywords: multilingualism, multilingual benchmarks, multilingual evaluation, less-resourced languages, robustness, security and privacy
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: Afrikaans, Arabic, Azerbaijani, Belarusian, Bulgarian, Danish, German, Greek, English, Spanish, Finnish, French, Hindi, Hungarian, Italian, Korean, Lithuanian, Latvian, Malayalam, Dutch, Polish, Portuguese, Romanian, Russian, Swedish, Swahili, Tamil, Thai, Turkish, Ukrainian, Vietnamese, Chinese.
Submission Number: 7030