TL;DR: A novel method for generating human-readable adversarial prompts in seconds for attacking and red-teaming LLMs.
Abstract: Large Language Models (LLMs) are vulnerable to **jailbreaking attacks** that lead to generation of inappropriate or harmful content.
Manual red-teaming requires a time-consuming search for adversarial prompts, whereas automatic adversarial prompt generation often leads to semantically meaningless attacks that do not scale well.
In this paper, we present a novel method that uses another LLM, called **AdvPrompter**, to generate human-readable adversarial prompts in seconds.
AdvPrompter, which is trained using an alternating optimization algorithm, generates suffixes that veil the input instruction without changing its meaning, such that the TargetLLM is lured to give a harmful response.
Experiments on popular open-source TargetLLMs yield highly competitive results on the AdvBench and HarmBench datasets, which also transfer to closed-source black-box LLMs.
We also show that training on adversarial suffixes generated by AdvPrompter is a promising strategy for improving the robustness of LLMs to jailbreaking attacks.
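To make the attack workflow concrete, the sketch below shows how an AdvPrompter-style suffix attack is applied at inference time: the trained prompter model generates a human-readable suffix for a given instruction, the suffixed prompt is sent to the TargetLLM, and its response is inspected. This is a minimal illustration assuming Hugging Face `transformers` and placeholder checkpoint paths, not the authors' implementation (see the linked repository for that).

```python
# Conceptual sketch of an AdvPrompter-style attack at inference time.
# Checkpoint paths below are placeholders, not released artifacts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical checkpoints: a trained AdvPrompter and the TargetLLM under attack.
prompter_tok = AutoTokenizer.from_pretrained("path/to/advprompter")
prompter = AutoModelForCausalLM.from_pretrained("path/to/advprompter").to(device)
target_tok = AutoTokenizer.from_pretrained("path/to/target-llm")
target = AutoModelForCausalLM.from_pretrained("path/to/target-llm").to(device)

def generate_suffix(instruction: str, max_new_tokens: int = 30) -> str:
    """Ask the prompter model to continue the instruction with an adversarial suffix."""
    inputs = prompter_tok(instruction, return_tensors="pt").to(device)
    out = prompter.generate(**inputs, max_new_tokens=max_new_tokens,
                            do_sample=True, top_p=0.9)
    # Keep only the newly generated tokens, i.e. the suffix.
    suffix_ids = out[0, inputs["input_ids"].shape[1]:]
    return prompter_tok.decode(suffix_ids, skip_special_tokens=True)

def query_target(prompt: str, max_new_tokens: int = 128) -> str:
    """Send the suffixed prompt to the TargetLLM and return its response."""
    inputs = target_tok(prompt, return_tensors="pt").to(device)
    out = target.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return target_tok.decode(out[0, inputs["input_ids"].shape[1]:],
                             skip_special_tokens=True)

instruction = "..."  # red-teaming instruction, elided here
suffix = generate_suffix(instruction)
response = query_target(instruction + " " + suffix)
print(suffix, response, sep="\n---\n")
```

Because the suffix comes from a single forward generation pass of the prompter rather than a per-prompt search, producing a candidate jailbreak takes seconds; in practice one would sample several suffixes per instruction and keep those that elicit a non-refusing response.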
Lay Summary: Modern chatbots are built with “guardrails” that should stop them from giving dangerous or hateful answers. Yet with the right wording, people can still slip past these safeguards—a practice known as “jailbreaking.” Finding such tricks by hand is slow, and existing automated searches often spit out nonsense that real users would never type.
Our study introduces AdvPrompter, a second chatbot that acts like a mischievous sparring partner. Given any user question, it invents a short, understandable add-on phrase that hides the request from the guardrails while keeping the meaning intact. In seconds, AdvPrompter uncovers jailbreaks that work against popular open-source models and even some commercial black-box systems.
We then turn the tables: by retraining the original chatbot on the failures exposed by AdvPrompter, we make it noticeably harder to fool. Because the tool is open-sourced, companies and researchers can use it to stress-test their own systems and, ultimately, build safer and more trustworthy AI.
Link To Code: https://github.com/facebookresearch/advprompter
Primary Area: Social Aspects->Security
Keywords: adversarial attacks, prompt optimization, red-teaming LLMs
Submission Number: 12977