Jailbreaking Black Box Large Language Models in Twenty Queries

Published: 01 Nov 2023, Last Modified: 12 Dec 2023, R0-FoMo Poster
Keywords: large language models, LLM, jailbreaking, red teaming
TL;DR: We propose a systematic method that generates semantically meaningful jailbreak prompts for black-box large language models, often in fewer than twenty queries.
Abstract: There is growing research interest in ensuring that large language models align with human safety and ethical guidelines. Adversarial attacks known as 'jailbreaks' pose a significant threat as they coax models into overriding alignment safeguards. Identifying these vulnerabilities by attacking a language model (red teaming) is instrumental in understanding inherent weaknesses and preventing misuse. We present Prompt Automatic Iterative Refinement (PAIR), which generates semantic jailbreaks with only black-box access to a language model. Empirically, PAIR often requires fewer than 20 queries, orders of magnitude fewer than prior jailbreak attacks. PAIR draws inspiration from the human process of social engineering, and employs an attacker language model to automatically generate adversarial prompts in place of a human. The attacker model uses the target model's response as additional context to iteratively refine the adversarial prompt. PAIR achieves competitive jailbreaking success rates and transferability on open and closed-source language models, including GPT-3.5/4, Vicuna, and PaLM.
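For illustration, the loop below sketches the attacker-refines-against-target-response process the abstract describes. It is not the authors' released implementation: the helpers `query_attacker`, `query_target`, and `is_jailbroken` are hypothetical stand-ins for an attacker LLM call, a black-box target LLM call, and a jailbreak judge.

```python
from typing import Optional

def pair_attack(objective: str, max_queries: int = 20) -> Optional[str]:
    """Sketch of an iterative refinement loop in the spirit of PAIR.

    Hypothetical helpers (not defined here):
      query_attacker(history)  -> candidate adversarial prompt from the attacker LLM
      query_target(prompt)     -> response from the black-box target LLM
      is_jailbroken(obj, p, r) -> True if the response constitutes a jailbreak
    """
    # Seed the attacker with the jailbreak objective.
    conversation = [f"Craft a prompt that makes the target model: {objective}"]

    for _ in range(max_queries):
        # Attacker LLM proposes (or refines) a candidate adversarial prompt.
        candidate_prompt = query_attacker(conversation)

        # Query the black-box target model with the candidate prompt.
        target_response = query_target(candidate_prompt)

        # Stop if the judge deems the target's response a successful jailbreak.
        if is_jailbroken(objective, candidate_prompt, target_response):
            return candidate_prompt

        # Otherwise, feed the target's response back to the attacker as
        # additional context for the next refinement step.
        conversation.append(
            f"Previous prompt: {candidate_prompt}\n"
            f"Target response: {target_response}\n"
            "Refine the prompt to better achieve the objective."
        )

    return None  # no jailbreak found within the query budget
```

Because the loop only reads the target's textual responses, it needs nothing beyond black-box query access, which is what allows the method to transfer across open and closed-source models.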
Submission Number: 65