Submission Type: Regular Short Paper
Submission Track: Information Retrieval and Text Mining
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Dense Retrieval, Corpus Poisoning, Adversarial Attack
TL;DR: We propose a novel attack for dense retrieval systems, in which a malicious user generates a small number of adversarial passages to mislead dense retrieval models.
Abstract: Dense retrievers have achieved state-of-the-art performance in various information retrieval tasks, but to what extent can they be safely deployed in real-world applications? In this work, we propose a novel attack for dense retrieval systems in which a malicious user generates a small number of adversarial passages by perturbing discrete tokens to maximize similarity with a provided set of training queries. When these adversarial passages are inserted into a large retrieval corpus, we show that this attack is highly effective in fooling these systems into retrieving them for queries that were not seen by the attacker. More surprisingly, these adversarial passages can directly generalize to out-of-domain queries and corpora with a high attack success rate --- for instance, we find that 50 generated passages optimized on Natural Questions can mislead >94% of questions posed in financial documents or online forums. We also benchmark and compare a range of state-of-the-art dense retrievers, both unsupervised and supervised. Although different systems exhibit varying levels of vulnerability, we show they can all be successfully attacked by injecting up to 500 passages, a small fraction compared to a retrieval corpus of millions of passages.
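The core idea in the abstract --- perturbing the discrete tokens of an adversarial passage so that its embedding is similar to many training queries at once --- can be illustrated with a toy sketch. This is not the paper's actual method or models (the paper optimizes against real dense retrievers); it is a simplified coordinate-ascent simulation over a random bag-of-words encoder, with all names and sizes invented for illustration:

```python
import numpy as np

# Toy sketch (NOT the paper's exact attack): greedily perturb the discrete
# tokens of a passage to maximize average embedding similarity with a set of
# "training query" vectors. Embeddings and queries are random stand-ins.

rng = np.random.default_rng(0)
VOCAB, DIM = 200, 16
emb = rng.normal(size=(VOCAB, DIM))  # hypothetical token-embedding table

def encode(token_ids):
    """Mean-pooled bag-of-words encoder standing in for a dense retriever."""
    return emb[token_ids].mean(axis=0)

def score(passage_ids, query_vecs):
    """Average dot-product similarity between the passage and all queries."""
    return float(np.mean(query_vecs @ encode(passage_ids)))

def attack(passage_ids, query_vecs, sweeps=3):
    """Coordinate ascent over token positions: at each position, try every
    vocabulary token and keep the swap that most raises the average score."""
    ids = list(passage_ids)
    for _ in range(sweeps):
        for pos in range(len(ids)):
            ids[pos] = max(
                range(VOCAB),
                key=lambda t: score(ids[:pos] + [t] + ids[pos + 1:], query_vecs),
            )
    return ids

queries = rng.normal(size=(20, DIM))         # stand-in training queries
init = list(rng.integers(0, VOCAB, size=8))  # random initial adversarial passage
adv = attack(init, queries)
print(f"similarity before: {score(init, queries):.3f}, "
      f"after: {score(adv, queries):.3f}")
```

Because each swap is only kept when it does not lower the objective, the passage's average similarity to the query set is non-decreasing; the paper's gradient-based (HotFlip-style) variant makes this search tractable over a real model's vocabulary.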
Submission Number: 5103