Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models

Published: 10 Jul 2024, Last Modified: 26 Aug 2024
Venue: COLM
License: CC BY 4.0
Research Area: Evaluation, Safety, Inference algorithms for LMs
Keywords: Adversarial Robustness, Adversarial Attacks and Defences, Retrieval-based LLM, In-context Learning, Evaluation
TL;DR: We perform a systematic study of the adversarial robustness of ICL variants across different models and propose a new training-free adversarial defence method.
Abstract: With the emergence of large language models such as LLaMA and OpenAI GPT-3, In-Context Learning (ICL) has gained significant attention due to its effectiveness and efficiency. However, ICL is very sensitive to the choice, order, and verbaliser used to encode the demonstrations in the prompt. \emph{Retrieval-Augmented ICL} methods address this problem by leveraging retrievers to extract semantically related examples as demonstrations. While this approach yields more accurate results, its robustness against various types of adversarial attacks, including perturbations of test samples, demonstrations, and retrieved data, remains under-explored. Our study reveals that retrieval-augmented models can enhance robustness against test-sample attacks, outperforming vanilla ICL with a 4.87\% reduction in Attack Success Rate (ASR); however, they exhibit overconfidence in the demonstrations, leading to a 2\% increase in ASR under demonstration attacks. Adversarial training can improve the robustness of ICL methods to adversarial attacks, but such a training scheme is too costly in the context of LLMs. As an alternative, we introduce \emph{DARD}, an effective training-free adversarial defence method that enriches the example pool with attacked samples. We show that DARD yields improvements in performance and robustness, achieving a 15\% reduction in ASR over the baselines. Code and data are available jointly with this submission as supplementary material.
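To make the retrieval-based ICL setup concrete, the following is a minimal sketch of retrieving demonstrations by semantic similarity and building a DARD-style enriched example pool, as described in the abstract. The embedding model (`all-MiniLM-L6-v2`), the function names, and the `perturb` attack function are illustrative assumptions, not the paper's actual implementation; refer to the supplementary material for the authors' code.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative embedding model choice; the paper's retriever may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

def retrieve_demonstrations(test_input, pool, k=4):
    """Return the k pool examples most semantically similar to the test input.

    pool: list of (text, label) pairs used as candidate demonstrations.
    """
    pool_emb = model.encode([text for text, _ in pool], convert_to_tensor=True)
    query_emb = model.encode(test_input, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, pool_emb)[0]          # shape: (len(pool),)
    top = scores.topk(min(k, len(pool))).indices.tolist()  # highest-similarity indices
    return [pool[i] for i in top]

def enrich_pool(pool, perturb):
    """DARD-style enrichment (sketch): add attacked copies of pool examples.

    `perturb` is a hypothetical adversarial attack function (e.g., a
    character- or word-level perturbation); originals are kept so the
    retriever can match both clean and attacked test inputs.
    """
    return pool + [(perturb(text), label) for text, label in pool]
```

Under this sketch, a perturbed test sample retrieves demonstrations that resemble its attacked form, which is the intuition behind enriching the example pool rather than adversarially retraining the model.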
Supplementary Material: zip
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the COLM Code of Ethics on https://colmweb.org/CoE.html
Author Guide: I certify that this submission complies with the submission instructions as described on https://colmweb.org/AuthorGuide.html
Submission Number: 1170