Keywords: Contextual Integrity; Inference-time Privacy; Input-output flow
TL;DR: CIMemories is a benchmark of synthetic user profiles paired with recipient–task contexts that simulates persistent, cross-session LLM “memory,” evaluating whether models use long-term context appropriately: sharing what’s needed while avoiding leaks.
Abstract: Large Language Models (LLMs) increasingly use persistent memory from past interactions to enhance personalization and task performance. However, this memory introduces critical risks when sensitive information is revealed in inappropriate contexts. We present CIMemories, a benchmark for evaluating whether LLMs appropriately control information flow from memory based on task context. CIMemories uses synthetic user profiles with over 100 attributes per user, paired with diverse task contexts in which each attribute may be essential for some tasks but inappropriate for others. Our evaluation reveals that frontier models exhibit up to 69% attribute-level violations (leaking information inappropriately), with lower violation rates often coming at the cost of task utility. Violations accumulate across both tasks and runs: as usage increases from 1 to 40 tasks, GPT-5’s violations rise from 0.1% to 9.6%, reaching 25.1% when the same prompt is executed 5 times, revealing arbitrary and unstable behavior in which models leak different attributes for identical prompts. Privacy-conscious prompting does not solve this—models overgeneralize, sharing everything or nothing rather than making nuanced, context-dependent decisions. These findings reveal fundamental limitations that require contextually aware reasoning capabilities, not just better prompting or scaling. Code is available at https://github.com/facebookresearch/CIMemories.
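The abstract measures attribute-level violations that accumulate across tasks and repeated runs of the same prompt. As a rough illustration of that accumulation (a minimal sketch, not the authors' evaluation code; the attribute names, the `violation_rate` helper, and the toy data are invented here), leaks can be unioned over runs before dividing by the number of context-inappropriate attributes:

```python
# Hedged sketch of an attribute-level violation rate; all names and
# data below are hypothetical, not from the CIMemories codebase.

def violation_rate(runs, inappropriate):
    """Fraction of context-inappropriate attributes leaked in at least
    one output, with leaks accumulating (set union) across runs/tasks."""
    leaked = set()
    for output_attrs in runs:          # attributes detected in each output
        leaked |= set(output_attrs) & inappropriate
    return len(leaked) / len(inappropriate) if inappropriate else 0.0

# Toy profile: 4 attributes are inappropriate for this recipient/task.
inappropriate = {"ssn", "diagnosis", "salary", "home_address"}
# Three runs of the same prompt leak different attributes each time,
# mirroring the unstable behavior the abstract describes.
runs = [{"ssn"}, {"diagnosis"}, {"ssn", "salary"}]
print(violation_rate(runs, inappropriate))  # 0.75
```

Because the union only grows, per-run rates can look small while the cumulative rate climbs, matching the abstract's 0.1% → 9.6% → 25.1% pattern in spirit.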
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 22216