reteLLMe: Design Rules for Using Large Language Models to Protect the Privacy of Individuals in Their Textual Contributions

Published: 2024, Last Modified: 28 Jan 2026. ESORICS Workshops (1) 2024. License: CC BY-SA 4.0
Abstract: The advanced inference capabilities of Large Language Models (LLMs) pose a significant threat to the privacy of individuals by enabling third parties to accurately infer certain personal attributes (such as gender, age, location, religion, and political opinions) from their writings. Paradoxically, LLMs can also be used to protect individuals by helping them modify their textual output to prevent certain unwanted inferences, opening the way to new tools. Examples include sanitising online reviews (e.g., of hotels or movies), or sanitising CVs and cover letters. However, how can we avoid misestimating the inference risks faced by LLM-based text sanitisers? Can the protection offered be overestimated? Is the original purpose of the produced text preserved?