Position: Contextual Integrity is Inadequately Applied to Language Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Position Paper Track (poster) · CC BY 4.0
TL;DR: This position paper argues that existing literature adopts CI for LLMs without embracing the theory's fundamental tenets, essentially amounting to a form of "CI-washing."
Abstract: The machine learning community is discovering Contextual Integrity (CI) as a useful framework to assess the privacy implications of large language models (LLMs). This is an encouraging development. CI theory emphasizes sharing information in accordance with *privacy norms* and can bridge the social, legal, political, and technical aspects essential for evaluating privacy in LLMs. However, this is also a good point to reflect on the use of CI for LLMs. *This position paper argues that existing literature inadequately applies CI to LLMs without embracing the theory's fundamental tenets.* Inadequate applications of CI could lead to incorrect conclusions and flawed privacy-preserving designs. We clarify the four fundamental tenets of CI theory, systematize prior work to examine whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity, positional bias).
Lay Summary: The growing use of large language models (LLMs) to automate tasks across and within social contexts (e.g., workplaces, households, health, and education) raises questions about the privacy implications of these models and how best to evaluate them. The machine learning community is exploring Contextual Integrity (CI) as a framework for assessing the privacy implications of LLMs. CI theory emphasizes sharing information in accordance with privacy norms and can help bridge the social, legal, political, and technical aspects essential to evaluating privacy in LLMs. However, this is also an opportunity to reflect on the use of CI for LLMs. Despite the apparent simplicity of the CI framework, its application is far from trivial. A rote use of the framework—while potentially insightful—does not deepen our understanding of LLMs' privacy implications. To meaningfully operationalize CI theory, we must support its four essential tenets. This position paper argues that existing literature inadequately applies CI to LLMs by failing to fully embrace these core tenets. We clarify the four fundamental tenets of CI theory, systematize prior work to examine whether it deviates from these tenets, and highlight overlooked issues in experimental hygiene for LLMs (e.g., prompt sensitivity and positional bias).
Primary Area: Social, Ethical, and Environmental Impacts
Keywords: contextual integrity, privacy, large language models
Submission Number: 94