Abstract: Multimodal Large Language Models (MLLMs) have become increasingly powerful and are widely adopted in practical applications.
However, recent research has revealed their vulnerability to multimodal jailbreak attacks, in which the model is induced to generate harmful content, leading to safety risks. Although most MLLMs have undergone safety alignment, the visual modality remains vulnerable to such attacks.
In our work, we discover that flowcharts containing partially harmful information can induce MLLMs to provide additional harmful details. Building on this observation, we propose FC-Attack, a jailbreak attack based on auto-generated flowcharts.
Specifically, FC-Attack first fine-tunes a pre-trained LLM on benign datasets to create a step-description generator.
The generator is then used to produce step descriptions corresponding to a harmful query, which are transformed into flowcharts in $3$ different shapes (vertical, horizontal, and S-shaped) as visual prompts.
These flowcharts are then combined with a benign textual prompt to execute the jailbreak attack on MLLMs.
Our evaluations on Advbench show that FC-Attack attains an attack success rate of up to $96\%$ via images and up to $78\%$ via videos across multiple MLLMs.
Additionally, we investigate factors affecting the attack performance, including the number of steps and the font styles in the flowcharts.
We also find that changing the font style improves the jailbreak performance of FC-Attack on Claude-3.5 from $4\%$ to $28\%$. To mitigate the attack, we explore several defenses and find that AdaShield can largely reduce the jailbreak performance, but at the cost of reduced utility.
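To make the rendering step of the pipeline concrete, below is a minimal sketch of how step descriptions could be laid out as a vertical flowchart image. It assumes Pillow is available; the placeholder step texts, layout parameters, and the function name render_vertical_flowchart are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch: render step descriptions as a vertical flowchart image.
# Illustrative approximation only; assumes Pillow (PIL) is installed and
# uses placeholder step texts rather than generator output.
from PIL import Image, ImageDraw, ImageFont

def render_vertical_flowchart(steps, box_w=420, box_h=70, gap=40, margin=30):
    """Draw each step as a box and connect consecutive boxes with arrows."""
    n = len(steps)
    width = box_w + 2 * margin
    height = n * box_h + (n - 1) * gap + 2 * margin
    img = Image.new("RGB", (width, height), "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()

    for i, text in enumerate(steps):
        top = margin + i * (box_h + gap)
        # Step box with its description
        draw.rectangle([margin, top, margin + box_w, top + box_h],
                       outline="black", width=2)
        draw.text((margin + 10, top + 10), f"Step {i + 1}: {text}",
                  fill="black", font=font)
        # Connector to the next box: vertical line plus a small arrowhead
        if i < n - 1:
            x = margin + box_w // 2
            y0, y1 = top + box_h, top + box_h + gap
            draw.line([x, y0, x, y1], fill="black", width=2)
            draw.polygon([(x - 5, y1 - 8), (x + 5, y1 - 8), (x, y1)],
                         fill="black")
    return img

if __name__ == "__main__":
    # Placeholder benign steps; in FC-Attack these would come from the
    # fine-tuned step-description generator.
    demo_steps = ["Collect the required materials",
                  "Prepare the workspace",
                  "Carry out the procedure",
                  "Review the result"]
    render_vertical_flowchart(demo_steps).save("flowchart_vertical.png")
```

The horizontal and S-shaped variants described in the abstract would only change the box placement and connector geometry, not the overall idea of turning generated step descriptions into a visual prompt.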
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: safety and alignment
Contribution Types: NLP engineering experiment, Reproduction study
Languages Studied: English
Submission Number: 5023