In-Context Learning with Differentially Private Text Sanitization in Large Language Models

Published: 01 Jan 2024, Last Modified: 25 Jul 2025 · DSPP (1) 2024 · CC BY-SA 4.0
Abstract: With the increasing popularity of cloud-based large language models, users often inadvertently input text containing personal information during interactions, leading to significant privacy concerns. To address this challenge, we propose an in-context learning (ICL) approach based on differential privacy (DP) that formally protects users' instances and context information. The core idea is to enhance local differentially private text sanitization and use token-mapping relationships to remap private responses effectively. We experimentally evaluate our method against CusText, two-shot, and zero-shot baselines. The findings indicate that our method attains competitive utility while maintaining robust privacy protection. Code is available at https://github.com/larryfans/ICL_DP_Text-Sanitization.
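The two ideas in the abstract, sanitizing the prompt with a local-DP mechanism and keeping the token-mapping relationship so the model's response can be remapped, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy vocabulary, 2-D embeddings, and the exponential-mechanism sampler below are all illustrative assumptions; a real system would use full word embeddings and the paper's calibrated mechanism.

```python
import math
import random

# Hypothetical toy vocabulary with 2-D embeddings (assumption for
# illustration; the real method works over actual word embeddings).
EMBEDDINGS = {
    "alice":  (0.0, 1.0),
    "bob":    (0.1, 0.9),
    "carol":  (0.2, 1.1),
    "paris":  (5.0, 5.0),
    "london": (5.1, 4.9),
}

def sanitize(tokens, epsilon, seed=0):
    """Replace each sensitive token via an exponential-mechanism-style
    sampler: candidates closer in embedding space get higher probability.
    Returns the sanitized tokens and the mapping needed to remap later."""
    rng = random.Random(seed)
    mapping, out = {}, []
    for tok in tokens:
        if tok not in EMBEDDINGS:
            out.append(tok)  # tokens outside the sensitive vocab pass through
            continue
        cands = list(EMBEDDINGS)
        weights = [
            math.exp(-epsilon * math.dist(EMBEDDINGS[tok], EMBEDDINGS[c]) / 2)
            for c in cands
        ]
        repl = rng.choices(cands, weights=weights, k=1)[0]
        mapping[repl] = tok  # remember replacement -> original
        out.append(repl)
    return out, mapping

def remap(response_tokens, mapping):
    """Map sanitized tokens appearing in the model's response back to
    the user's original tokens; other tokens are left unchanged."""
    return [mapping.get(t, t) for t in response_tokens]
```

In use, the client sanitizes the prompt locally, sends only the sanitized tokens to the cloud model, and applies `remap` to the returned response; the mapping never leaves the client. (If two originals happen to draw the same replacement, this sketch keeps only the last one.)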