TL;DR: We propose a detection method for indirect prompt injection attacks based on identifying reduced dependencies between agent tool calls and user inputs, achieving both security and utility.
Abstract: Recent research has shown that LLM agents are vulnerable to indirect prompt injection (IPI) attacks, where malicious tasks embedded in tool-retrieved information can redirect the agent to take unauthorized actions. Existing defenses against IPI have significant limitations: they either require substantial model training resources, lack effectiveness against sophisticated attacks, or degrade normal utility. We present MELON (Masked re-Execution and TooL comparisON), a novel IPI defense. Our approach builds on the observation that, under a successful attack, the agent's next action becomes less dependent on the user task and more dependent on the malicious task. Following this, we design MELON to detect attacks by re-executing the agent's trajectory with a masked user prompt modified through a masking function. We identify an attack if the actions generated in the original and masked executions are similar. We also include three key designs to reduce potential false positives and false negatives. Extensive evaluation on the IPI benchmark AgentDojo demonstrates that MELON outperforms SOTA defenses in both attack prevention and utility preservation. Moreover, we show that combining MELON with a SOTA prompt augmentation defense (denoted as MELON-Aug) further improves its performance. We also conduct a detailed ablation study to validate our key designs. Code is available at https://github.com/kaijiezhu11/MELON.
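The following is a minimal sketch of the masked re-execution idea described in the abstract, assuming a hypothetical `agent_step` interface, a masking function, and a tool-call similarity measure; the names and threshold are illustrative, not the paper's implementation (which additionally includes three designs to reduce false positives and false negatives).

```python
# Sketch of masked re-execution and tool comparison (illustrative only).
from typing import Callable, Dict, List

ToolCall = Dict[str, str]  # e.g., {"name": "send_money", "args": "..."}

def detect_ipi(
    agent_step: Callable[[str, List[str]], List[ToolCall]],  # hypothetical agent interface
    user_prompt: str,
    retrieved_context: List[str],
    mask_fn: Callable[[str], str],                            # masking function for the user prompt
    similarity: Callable[[ToolCall, ToolCall], float],        # tool-call similarity in [0, 1]
    threshold: float = 0.8,                                   # assumed detection threshold
) -> bool:
    """Flag an indirect prompt injection if the agent proposes (nearly)
    the same tool calls with and without the user's task."""
    # Original execution: the agent acts on the real user task plus retrieved data.
    original_calls = agent_step(user_prompt, retrieved_context)

    # Masked re-execution: the user task is masked, but the (possibly
    # malicious) tool-retrieved content is kept.
    masked_calls = agent_step(mask_fn(user_prompt), retrieved_context)

    # If a masked-run action closely matches an original action, that action
    # is likely driven by the retrieved content rather than the user task.
    for m in masked_calls:
        for o in original_calls:
            if similarity(m, o) >= threshold:
                return True  # likely injected instruction
    return False
```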
Lay Summary: AI assistants that can use tools like checking emails or browsing websites are becoming common, but they have a dangerous vulnerability: malicious instructions hidden in external content can trick them into performing unauthorized actions, like transferring money to scammers instead of completing your request.
Existing defenses have major flaws: they either require expensive AI retraining, block too many legitimate actions (reducing usefulness), or maintain functionality but fail against sophisticated attacks.
We developed MELON, a defense that runs two parallel processes: one handles your original request normally, while the other removes your request but keeps retrieved information. If both try to perform the same actions, it indicates the AI is following hidden malicious instructions rather than your request.
Testing shows MELON prevents over 99% of attacks while maintaining the AI's ability to complete legitimate tasks—a major improvement over existing defenses that force users to choose between security and functionality. MELON makes AI assistants safer for everyday use without significantly reducing their helpfulness.
Link To Code: https://github.com/kaijiezhu11/MELON
Primary Area: Social Aspects->Security
Keywords: Indirect prompt injection, Agent systems for tool use, Large language models
Flagged For Ethics Review: true
Submission Number: 8180