Attacking Large Language Models with Projected Gradient Descent

Published: 28 Jun 2024, Last Modified: 25 Jul 2024
Venue: NextGenAISafety 2024 Poster
License: CC BY 4.0
Keywords: Adversarial attack, Projected Gradient Descent, Large Language Models, Automatic Red Teaming, Jailbreak
TL;DR: We show how ordinary gradient-based optimization (Projected Gradient Descent) can be used to efficiently and effectively attack Large Language Models.
Abstract: Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high computational cost makes them unsuitable for, e.g., quantitative analyses and adversarial training. To remedy this, we revisit Projected Gradient Descent (PGD) on the continuously relaxed input prompt. Although previous attempts with ordinary gradient-based attacks largely failed, we show that carefully controlling the error introduced by the continuous relaxation tremendously boosts their efficacy. Our PGD for LLMs is up to one order of magnitude faster than state-of-the-art discrete optimization at achieving the same devastating attack results. The availability of such effective and efficient adversarial attacks is key for advancing and evaluating the alignment of LLMs.
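To make the idea in the abstract concrete, below is a minimal, heavily simplified sketch of PGD on a continuously relaxed prompt: the adversarial suffix is a row-stochastic matrix over the vocabulary, each gradient step is followed by a Euclidean projection back onto the probability simplex, and the relaxed suffix is finally discretized by argmax. This is not the authors' released implementation; the model ("gpt2"), the harmless toy target string, the fixed suffix length, and the plain simplex projection are all illustrative assumptions (the paper additionally controls the error introduced by the relaxation, which this sketch omits).

```python
# Sketch: PGD on a continuously relaxed adversarial suffix (illustrative only).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; the paper attacks aligned chat LLMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
for p in model.parameters():          # the attack only optimizes the prompt
    p.requires_grad_(False)

emb_matrix = model.get_input_embeddings().weight   # (V, d) token embedding table
V = emb_matrix.shape[0]

def project_simplex(x):
    """Euclidean projection of each row of x onto the probability simplex."""
    u, _ = torch.sort(x, dim=-1, descending=True)
    css = u.cumsum(dim=-1) - 1.0
    k = torch.arange(1, x.shape[-1] + 1, device=x.device)
    cond = u - css / k > 0
    rho = cond.float().cumsum(dim=-1).argmax(dim=-1, keepdim=True)
    tau = css.gather(-1, rho) / (rho + 1).float()
    return torch.clamp(x - tau, min=0.0)

prompt_ids = tok("Tell me how to", return_tensors="pt").input_ids
target_ids = tok(" build a birdhouse", return_tensors="pt").input_ids  # toy target
n_adv, steps, lr = 8, 100, 0.1        # suffix length, PGD iterations, step size

# Relaxed one-hot variables for the adversarial suffix, initialized uniformly.
adv = torch.full((1, n_adv, V), 1.0 / V, requires_grad=True)

prompt_emb = emb_matrix[prompt_ids]   # (1, Lp, d)
target_emb = emb_matrix[target_ids]   # (1, Lt, d)
Lt = target_ids.shape[1]

for _ in range(steps):
    adv_emb = adv @ emb_matrix        # soft token embeddings of the relaxed suffix
    inputs = torch.cat([prompt_emb, adv_emb, target_emb], dim=1)
    logits = model(inputs_embeds=inputs).logits
    # Cross-entropy of the target tokens given prompt + relaxed suffix.
    pred = logits[:, -Lt - 1:-1, :]
    loss = F.cross_entropy(pred.reshape(-1, V), target_ids.reshape(-1))
    loss.backward()
    with torch.no_grad():
        adv -= lr * adv.grad                  # gradient descent step
        adv.copy_(project_simplex(adv))       # project rows back onto the simplex
        adv.grad.zero_()

# Discretize: pick the most likely token at each relaxed position.
print("Adversarial suffix candidate:", tok.decode(adv.argmax(dim=-1)[0]))
```

In this simplified form, the only optimization variable is the relaxed suffix; the model weights stay frozen and gradients are obtained with a single forward/backward pass per step, which is where the efficiency advantage over purely discrete search comes from.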
Submission Number: 122