DeRAG: Black-box Adversarial Attacks on Multiple Retrieval-Augmented Generation Applications via Prompt Injection
Submission Type: Long
Keywords: Prompt injection, Differential evolution, RAG attack
Abstract: Adversarial prompt attacks can significantly undermine the reliability of Retrieval-Augmented Generation (RAG) systems by re-ranking retrieved documents so that the system produces incorrect outputs. In this paper, we present a novel method that applies Differential Evolution (DE) to optimize adversarial prompt suffixes for RAG-based question answering. Our approach is gradient-free: it treats the RAG pipeline as a black box, reflecting realistic attack scenarios, and evolves a population of candidate suffixes to maximize the retrieval rank of a targeted incorrect document. We conduct experiments on the BEIR QA datasets to evaluate attack success at specified retrieval-rank thresholds across multiple retrieval applications. Our results demonstrate that DE-based prompt optimization attains competitive (and in some cases higher) success rates compared to GGPP on dense retrievers and PRADA on sparse retrievers, while using only a small number of tokens (≤ 5) in the adversarial suffix. Furthermore, we introduce a readability-aware suffix construction strategy, validated by a statistically significant reduction in MLM negative log-likelihood under Welch's t-test. Through evaluations with a BERT-based adversarial suffix detector, we show that DE-generated suffixes evade detection, yielding near-chance detection accuracy.
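To illustrate the black-box setting described in the abstract, the following is a minimal sketch of DE over a short token-id suffix. It assumes the suffix is a fixed-length vector of token ids, that fitness is the rank of the targeted incorrect document returned by a black-box retriever, and that standard DE/rand/1 mutation with binomial crossover is used; the function name `retrieval_rank`, the vocabulary size, and all DE hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of black-box DE over an adversarial suffix.
import random

VOCAB_SIZE = 30522   # assumed tokenizer vocabulary size (BERT-style)
SUFFIX_LEN = 5       # abstract reports suffixes of at most 5 tokens
POP_SIZE, GENERATIONS, F, CR = 20, 50, 0.8, 0.9

def retrieval_rank(suffix_tokens):
    """Black-box fitness: rank of the targeted incorrect document after the
    suffix is appended to the query. Stubbed here with a deterministic dummy
    signal; a real attack would query the victim retriever. Lower is better."""
    rng = random.Random(hash(tuple(suffix_tokens)))
    return rng.randint(1, 100)

def clip(v):
    """Round a mutated coordinate back to a valid token id."""
    return min(max(int(round(v)), 0), VOCAB_SIZE - 1)

# Initialize a population of random token-id suffixes.
pop = [[random.randrange(VOCAB_SIZE) for _ in range(SUFFIX_LEN)]
       for _ in range(POP_SIZE)]
fitness = [retrieval_rank(ind) for ind in pop]

for _ in range(GENERATIONS):
    for i in range(POP_SIZE):
        # DE/rand/1 mutation on token ids, projected back into the vocabulary.
        a, b, c = random.sample([j for j in range(POP_SIZE) if j != i], 3)
        mutant = [clip(pop[a][k] + F * (pop[b][k] - pop[c][k]))
                  for k in range(SUFFIX_LEN)]
        # Binomial crossover between the current suffix and the mutant.
        j_rand = random.randrange(SUFFIX_LEN)
        trial = [mutant[k] if (random.random() < CR or k == j_rand)
                 else pop[i][k] for k in range(SUFFIX_LEN)]
        # Greedy selection: keep the suffix that ranks the target doc higher.
        f_trial = retrieval_rank(trial)
        if f_trial < fitness[i]:
            pop[i], fitness[i] = trial, f_trial

best = min(range(POP_SIZE), key=lambda i: fitness[i])
print("best suffix token ids:", pop[best], "rank:", fitness[best])
```

Because selection only compares black-box fitness values, no gradients through the retriever or generator are required, which matches the gradient-free setting claimed in the abstract.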
Submission Number: 23