Abstract: With the widespread adoption of LLMs, preserving privacy in user prompts has become crucial, as prompts risk exposing sensitive and personal data to cloud LLMs.
Traditional techniques such as homomorphic encryption, secure multi-party computation, and federated learning are hampered by heavy computational costs and requirements for user participation, which limits their applicability in LLM scenarios.
In this paper, we propose PromptObfus, a novel method for desensitizing LLM prompts.
The core idea of PromptObfus is "anti-adversarial" learning, which perturbs privacy-related words in the prompt to obscure sensitive information while keeping the model's predictions stable. This strategy balances robust privacy protection against task performance. The pipeline involves three key steps: predicting desensitized alternatives for privacy words, assessing task utility, and selecting the replacements that minimize performance degradation.
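For intuition, the following is a minimal, hypothetical sketch of that three-step loop using off-the-shelf Hugging Face pipelines as stand-ins; the model choices, scoring rule, and function names are assumptions for illustration, not the paper's actual implementation (see the repository linked below).

from transformers import pipeline

mask_filler = pipeline("fill-mask", model="distilroberta-base")  # step 1: propose desensitized alternatives
task_model = pipeline("sentiment-analysis")                      # step 2: local surrogate for task utility

def utility_gap(base_pred: dict, candidate_prompt: str) -> float:
    """Smaller is better: penalize label flips, otherwise measure the confidence shift."""
    pred = task_model(candidate_prompt)[0]
    if pred["label"] != base_pred["label"]:
        return 1.0
    return abs(pred["score"] - base_pred["score"])

def desensitize(prompt: str, privacy_words: list[str], top_k: int = 5) -> str:
    base_pred = task_model(prompt)[0]
    for word in privacy_words:
        if word not in prompt:
            continue
        masked = prompt.replace(word, mask_filler.tokenizer.mask_token, 1)
        candidates = [
            c["token_str"].strip()
            for c in mask_filler(masked, top_k=top_k)
            if c["token_str"].strip().lower() != word.lower()  # never keep the original private word
        ]
        if not candidates:
            continue
        # Step 3: pick the replacement that degrades the surrogate's prediction the least.
        best = min(candidates, key=lambda c: utility_gap(base_pred, prompt.replace(word, c, 1)))
        prompt = prompt.replace(word, best, 1)
    return prompt

print(desensitize("My neighbor Alice loved the film.", ["Alice"]))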
We demonstrate the effectiveness of our approach on three NLP tasks: sentiment classification, topic classification, and question answering. Results show that PromptObfus effectively prevents privacy inference by remote LLMs while preserving task performance.
The code is available at https://anonymous.4open.science/r/PromptObfus-83F7/.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: security and privacy
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 6622