ProxyPrompt: Securing System Prompts against Prompt Extraction Attacks

ICLR 2026 Conference Submission17044 Authors

19 Sept 2025 (modified: 08 Oct 2025)ICLR 2026 Conference SubmissionEveryoneRevisionsBibTeXCC BY 4.0
Keywords: prompt extraction, prompt security, large language models
TL;DR: ProxyPrompt is a defense method to secure system prompts against extraction attacks, maintaining the original task's utility while obfuscating the extracted prompt.
Abstract: The integration of large language models (LLMs) into a wide range of applications has highlighted the critical role of well-crafted system prompts, which require extensive testing and domain expertise. These prompts enhance task performance but may also encode sensitive information and filtering criteria, posing security risks if exposed. Recent research shows that system prompts are vulnerable to extraction attacks, while existing defenses are either easily bypassed or require constant updates to address new threats. In this work, we introduce ProxyPrompt, a novel defense mechanism that prevents prompt leakage by replacing the original prompt with a proxy. This proxy maintains the original task's utility while obfuscating the extracted prompt, ensuring attackers cannot reproduce the task or access sensitive information. Comprehensive evaluations on 264 LLM and system prompt pairs show that ProxyPrompt protects 94.70% of prompts from extraction attacks, outperforming the next-best defense, which only achieves 42.80%. The code will be open-sourced upon acceptance.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17044
Loading