Membership Inference Attacks on In-Context Learning Recommendation

ACL ARR 2026 January Submission6743 Authors

05 Jan 2026 (modified: 20 Mar 2026) · CC BY 4.0
Keywords: In-Context Learning, Privacy and Security, Recommendation System
Abstract: Large language model (LLM)-based recommender systems (RecSys) can adapt flexibly across domains. They use in-context learning (ICL), i.e., prompts containing sensitive, user-specific historical item interactions, to customize their recommendation functions. However, no prior study has examined whether such private information can be exposed by novel privacy attacks. We design several membership inference attacks (MIAs): \emph{Similarity, Memorization, Inquiry, and Poisoning attacks}, which aim to reveal whether a system prompt includes a victim's historical interactions. We carefully evaluate them on the latest open-source LLMs and three well-known RecSys datasets. The results confirm that the MIA threat to LLM-based RecSys is realistic, and that existing prompt-based defense methods may be insufficient to protect against these attacks.
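To make the threat model concrete, the following is a minimal, illustrative sketch of the decision rule behind a similarity-style MIA: the attacker compares the system's recommendations against a victim's known interaction history and infers membership when the overlap is high. All names and values here (the `jaccard` helper, the `threshold`, the toy item lists) are assumptions for illustration, not the paper's actual implementation.

```python
def jaccard(a, b):
    """Jaccard similarity between two item sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0


def similarity_attack(recommended_items, victim_history, threshold=0.3):
    """Guess membership: if the recommender's output overlaps strongly
    with the victim's known interactions, infer that the victim's
    history was included in the system prompt.

    The threshold is a hypothetical calibration value; in practice it
    would be tuned on shadow prompts with known membership labels.
    """
    score = jaccard(recommended_items, victim_history)
    return score >= threshold, score


# Toy stand-in for items an ICL-based LLM RecSys might return when the
# victim's history IS in the prompt (illustrative data only).
member_recs = ["Inception", "Interstellar", "Tenet", "Dunkirk"]
victim = ["Inception", "Interstellar", "Memento", "Tenet"]

is_member, score = similarity_attack(member_recs, victim)
```

The intuition is that a prompt containing the victim's interactions biases the model toward recommending similar or identical items, so an above-threshold overlap is evidence of membership.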
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: In-Context Learning, Privacy and Security, Recommendation System
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models, Data analysis
Languages Studied: English
Submission Number: 6743