Keywords: Retrieval-augmented Generation, Safety, Poisoning Attacks
TL;DR: Poisoning and prompt-injection attacks on RAG are not stealthy: they leave detectable anomalies in LLM attention. We detect these outliers via attention scores and propose the AV Filter, a defense that removes poisoned passages and improves robustness.
Abstract: Retrieval-augmented generation (RAG) systems are vulnerable to attacks that inject poisoned passages into the retrieved context, even at low corruption rates. We show that existing attacks are not designed to be stealthy, allowing reliable detection and mitigation. We formalize a distinguishability-based security game to quantify stealth for such attacks. If a few poisoned passages are to control the response, they must bias the inference process more than the benign ones do, inherently compromising stealth. This motivates analyzing intermediate signals of LLMs, such as attention weights, to approximate the influence of each passage on the response. Leveraging attention weights, we introduce the **Normalized Passage Attention Score** (NPAS) and a lightweight **Attention-Variance Filter** (AV Filter) that flags anomalous passages. Our method improves robustness, yielding up to $\sim$**20\%** higher accuracy than baseline defenses. We also develop adaptive attacks that attempt to conceal such anomalies, achieving up to a **35\%** success rate and underscoring the challenges of achieving true stealth when poisoning RAG systems.
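The abstract does not reproduce the exact definitions of NPAS or the AV Filter, so the following is a minimal Python sketch of the idea under stated assumptions: that NPAS is the length-normalized attention mass a passage receives from the generated response tokens, and that the AV Filter flags passages whose score is a statistical outlier relative to the other retrieved passages. All function names, the threshold `k`, and the span format below are illustrative, not the paper's implementation.

```python
import numpy as np

def npas(attn, passage_spans):
    """Hypothetical NPAS: mean attention mass each passage receives
    from the generated tokens, normalized by passage length.

    attn: (num_generated_tokens, num_context_tokens) attention weights
    passage_spans: list of (start, end) token-index pairs, one per passage
    """
    scores = []
    for start, end in passage_spans:
        # Attention flowing into this passage, averaged over generated
        # tokens, then normalized by the passage's token length.
        mass = attn[:, start:end].sum(axis=1).mean()
        scores.append(mass / (end - start))
    return np.array(scores)

def av_filter(scores, k=2.0):
    """Hypothetical AV Filter: flag passages whose NPAS exceeds the
    mean by more than k standard deviations (anomalously influential)."""
    mu, sigma = scores.mean(), scores.std()
    return [i for i, s in enumerate(scores) if s > mu + k * sigma]

# Usage sketch: drop flagged passages and regenerate from the rest.
# attn = ...  # attention over the retrieved context, taken from the LLM
# spans = [(0, 120), (120, 260), (260, 400)]
# flagged = av_filter(npas(attn, spans))
```

The design intuition follows the abstract's argument: a poisoned passage that controls the response must draw disproportionate attention, so a simple variance-based outlier test over per-passage scores can expose it.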
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 13794