Abstract: With the widespread use of LLMs, preserving privacy in user prompts has become crucial, as prompts risk exposing private and sensitive data to cloud-hosted LLMs.
Conventional techniques such as homomorphic encryption, secure multi-party computation, and federated learning suffer from heavy computational overhead and demands on user participation, which limit their applicability in LLM scenarios.
In this paper, we propose PromptObfus, a novel method for desensitizing LLM prompts.
The core idea of PromptObfus is "anti-adversarial" learning: it perturbs privacy words in the prompt to obscure sensitive information while keeping the model's predictions stable.
Specifically, PromptObfus frames prompt desensitization as a masked language modeling task, replacing privacy-sensitive terms with a [MASK] token. A desensitization model is trained to generate candidate replacements for each masked position. These candidates are subsequently selected based on gradient feedback from a surrogate model, ensuring minimal disruption to the task output.
We evaluate our approach on three NLP tasks. Results show that PromptObfus effectively prevents privacy inference by remote LLMs while preserving task performance.
Our code is publicly available at https://anonymous.4open.science/r/PromptObfus-BF36/.
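The following is a minimal sketch of the desensitization pipeline described in the abstract: mask a privacy word, let a masked language model propose candidate replacements, and pick the candidate that least disturbs a local surrogate model's prediction. The model names, the privacy-word list, and the gradient-norm scoring of candidates are illustrative assumptions, not the paper's exact configuration; see the linked repository for the actual implementation.

```python
# Sketch of PromptObfus-style prompt desensitization (assumptions noted above).
import torch
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          AutoModelForSequenceClassification)

MLM_NAME = "roberta-base"  # stands in for the trained desensitization model
SURROGATE_NAME = "distilbert-base-uncased-finetuned-sst-2-english"  # stands in for the local surrogate

mlm_tok = AutoTokenizer.from_pretrained(MLM_NAME)
mlm = AutoModelForMaskedLM.from_pretrained(MLM_NAME)
sur_tok = AutoTokenizer.from_pretrained(SURROGATE_NAME)
surrogate = AutoModelForSequenceClassification.from_pretrained(SURROGATE_NAME)


def candidate_replacements(prompt, privacy_word, k=10):
    """Mask one privacy word and let the MLM propose k candidate substitutes."""
    masked = prompt.replace(privacy_word, mlm_tok.mask_token, 1)
    inputs = mlm_tok(masked, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits
    mask_pos = (inputs.input_ids == mlm_tok.mask_token_id).nonzero()[0, 1]
    top_ids = logits[0, mask_pos].topk(k).indices
    return [mlm_tok.decode(int(i)).strip() for i in top_ids], masked


def task_disruption(original_prompt, candidate_prompt):
    """Score a candidate via gradient feedback from the surrogate: the norm of the
    loss gradient w.r.t. the input embeddings, using the surrogate's prediction on
    the original prompt as the reference label (smaller = less disruption)."""
    with torch.no_grad():
        ref = surrogate(**sur_tok(original_prompt, return_tensors="pt")).logits.argmax(-1)
    inputs = sur_tok(candidate_prompt, return_tensors="pt")
    embeds = surrogate.get_input_embeddings()(inputs.input_ids).detach().requires_grad_(True)
    out = surrogate(inputs_embeds=embeds, attention_mask=inputs.attention_mask, labels=ref)
    out.loss.backward()
    return embeds.grad.norm().item()


def desensitize(prompt, privacy_words):
    """Replace each privacy word with the candidate that least perturbs the surrogate."""
    for word in privacy_words:
        candidates, masked = candidate_replacements(prompt, word)
        scored = [(task_disruption(prompt, masked.replace(mlm_tok.mask_token, c)), c)
                  for c in candidates if c.lower() != word.lower()]
        prompt = masked.replace(mlm_tok.mask_token, min(scored)[1])
    return prompt


print(desensitize("Alice from Seattle says the movie was wonderful.", ["Alice", "Seattle"]))
```

In this sketch the surrogate is a local sentiment classifier, so "minimal disruption to the task output" is approximated by a first-order gradient score against the surrogate's own prediction on the original prompt; any locally runnable task model could play the same role.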
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: security and privacy
Contribution Types: NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1691