04 Oct 2024 · CC BY 4.0
Large Language Models (LLMs) can generate harmful content when prompted with carefully crafted inputs, a vulnerability known as LLM jailbreaking. As LLMs become more powerful, studying jailbreaking becomes a critical aspect of enhancing security and human value alignment. Currently, jailbreaks are usually implemented by appending adversarial suffixes or using prompt templates, approaches that suffer from low attack diversity. Inspired by diffusion models, this paper introduces DiffusionAttacker, an end-to-end generative method for jailbreak rewriting. Our approach employs a seq2seq text diffusion model as a generator, conditioning on the original prompt and guiding the denoising process with a novel attack loss. This method preserves the semantic content of the original prompt while producing rewrites that elicit harmful outputs. Additionally, we leverage the Gumbel-Softmax technique to make sampling from the diffusion model's output distribution differentiable, thereby eliminating the need for an iterative token search. Through extensive experiments on AdvBench and HarmBench, we show that DiffusionAttacker outperforms previous methods on various evaluation metrics, including attack success rate (ASR), fluency, and diversity.
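As a rough illustration (not the authors' implementation), the Gumbel-Softmax trick mentioned in the abstract can be sketched in PyTorch: it replaces non-differentiable categorical sampling over the vocabulary with a soft, reparameterized sample, so gradients of a downstream loss (here a placeholder standing in for the attack loss) can flow back to the token logits.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, hard=False):
    """Draw a differentiable sample from a categorical distribution
    over the vocabulary using the Gumbel-Softmax trick.

    logits: (..., vocab_size) unnormalized scores.
    tau:    temperature; lower values give samples closer to one-hot.
    hard:   if True, use the straight-through estimator (discrete
            one-hot forward pass, soft gradients backward).
    """
    # Sample Gumbel(0, 1) noise; small epsilons guard against log(0).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y = F.softmax((logits + gumbel) / tau, dim=-1)
    if hard:
        index = y.argmax(dim=-1, keepdim=True)
        y_hard = torch.zeros_like(y).scatter_(-1, index, 1.0)
        # Straight-through: forward pass is one-hot, gradient uses y.
        y = (y_hard - y).detach() + y
    return y

# Toy example: logits over a 5-token vocabulary for a 3-token sequence.
logits = torch.randn(3, 5, requires_grad=True)
soft_tokens = gumbel_softmax_sample(logits, tau=0.5)

# Placeholder scalar loss standing in for an attack loss on soft tokens.
loss = soft_tokens.pow(2).sum()
loss.backward()  # gradients reach the logits: no discrete token search
```

PyTorch also ships this as `torch.nn.functional.gumbel_softmax`; the manual version above only makes the reparameterization explicit.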