From LLMs to MLLMs: Exploring the Landscape of Multimodal Jailbreaking

ACL ARR 2024 June Submission3912 Authors

16 Jun 2024 (modified: 03 Jul 2024) · CC BY 4.0
Abstract: The rapid development of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has exposed their vulnerability to various adversarial attacks. This paper provides a comprehensive overview of jailbreaking research targeting both LLMs and MLLMs, highlighting recent advances in evaluation benchmarks, attack techniques, and defense strategies. Compared to the more mature state of unimodal jailbreaking, the multimodal domain remains underexplored. We summarize the limitations of existing work on multimodal jailbreaking and outline potential research directions, aiming to inspire future research and further enhance the robustness and security of MLLMs.
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: multimodal jailbreaking, jailbreak attack, jailbreak defense
Contribution Types: Surveys
Languages Studied: English
Submission Number: 3912