Abstract: In this paper, we introduce a black-box prompt optimization method that uses an attacker LLM agent to uncover higher levels of memorization in a victim agent than is revealed by prompting the target model with the training data directly, which is the dominant approach to quantifying memorization in LLMs. We use an iterative rejection-sampling optimization process to find instruction-based prompts with two main characteristics: (1) minimal overlap with the training data, to avoid presenting the solution to the model directly, and (2) maximal overlap between the victim model's output and the training data, to induce the victim to regurgitate training data. We observe that our instruction-based prompts generate outputs with 23.7% higher overlap with training data compared to the baseline prefix-suffix measurements. We analyze our attack in two settings: a practical setting with limited access to the sequence, where the suffix is withheld, and an empirical upper-bound setting that demonstrates the full power of the attack, where we have access to the entire sequence but impose a penalty to discourage prompts that present the solution directly. Our findings show that (1) instruction-tuned models can expose pre-training data as much as their base models, if not more; (2) contexts other than the original training data can lead to leakage; and (3) using instructions proposed by other LLMs opens a new avenue of automated attacks that warrants further study.
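To make the iterative rejection-sampling optimization concrete, here is a minimal sketch of the loop the abstract describes. The attacker proposes an instruction prompt, the victim's output is scored by overlap with the training data, overlap between the prompt itself and the training data is penalized, and only score-improving candidates are kept. `attacker_propose` and `victim_generate` are hypothetical stand-ins for the attacker and victim LLM calls, and the word-level LCS ratio is an illustrative overlap metric, not necessarily the one used in the paper.

```python
def lcs_ratio(a: str, b: str) -> float:
    """Word-level longest-common-subsequence length, normalized by len(b)."""
    x, y = a.split(), b.split()
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            dp[i + 1][j + 1] = dp[i][j] + 1 if xi == yj else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(x)][len(y)] / max(len(y), 1)

def optimize_prompt(attacker_propose, victim_generate, train_text: str,
                    rounds: int = 20, overlap_penalty: float = 1.0):
    """Rejection-sampling search for a high-leakage, low-overlap instruction prompt."""
    best_prompt, best_score = None, float("-inf")
    feedback = ""
    for _ in range(rounds):
        prompt = attacker_propose(train_text, feedback)   # attacker LLM call (assumed API)
        output = victim_generate(prompt)                  # victim LLM call (assumed API)
        leak = lcs_ratio(output, train_text)              # reward: output reproduces training data
        cheat = lcs_ratio(prompt, train_text)             # penalty: prompt copies the training data
        score = leak - overlap_penalty * cheat
        if score > best_score:                            # rejection sampling: keep only improvements
            best_prompt, best_score = prompt, score
        feedback = f"leak={leak:.2f}, prompt-overlap={cheat:.2f}"  # steer the next proposal
    return best_prompt, best_score
```

In this sketch, the practical setting corresponds to computing `leak` against only the accessible portion of the sequence, while the upper-bound setting uses the full sequence with a nonzero `overlap_penalty`.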
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: privacy, security, memorization
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 3755