E$^2$AT: Multimodal Jailbreak Defense via Dynamic Joint Optimization

ICLR 2026 Conference Submission 18789 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Jailbreak attack, Dynamic Joint Optimization, Multimodal Large Language Models
Abstract: Considerable research effort has been devoted to making Multimodal Large Language Models (MLLMs) robust against jailbreak attacks. However, existing methods for improving MLLMs' robustness still face critical challenges: ① how to efficiently tune massive numbers of weight parameters and ② how to ensure robustness against attacks across both the visual and textual modalities. To this end, we propose an $\textbf{E}$fficient $\textbf{E}$nd-to-end $\textbf{A}$dversarial $\textbf{T}$raining (E$^2$AT) framework that defends against both visual and textual adversarial attacks. On the visual side, E$^2$AT incorporates an efficient projector-based adversarial training module that aligns attack samples at the feature level. For the training objective, we propose a Dynamic Joint Multimodal Optimization (DJMO) strategy that improves generalization against jailbreak attacks by dynamically adjusting the weights between the normal and adversarial objectives. Extensive experiments are conducted with five major jailbreak attack methods across three mainstream MLLMs. Results demonstrate that E$^2$AT achieves state-of-the-art performance, outperforming existing baselines by an average margin of 34\% across the text and image modalities while maintaining clean-task performance. Furthermore, evaluations on real-world embodied intelligence systems highlight the practical applicability of E$^2$AT, paving the way for more secure and reliable multimodal systems. Our code is available at [https://anonymous.4open.science/r/EAT-FC71](https://anonymous.4open.science/r/EAT-FC71).
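The abstract describes DJMO only at a high level, so the following is a minimal, hypothetical sketch of what dynamically weighting a normal and an adversarial objective could look like in a PyTorch-style training step. The function name `dynamic_joint_loss`, the softmax-over-magnitudes weighting, and the projector-only parameter update in the usage comments are illustrative assumptions, not the authors' actual DJMO or E$^2$AT implementation.

```python
import torch


def dynamic_joint_loss(loss_clean: torch.Tensor,
                       loss_adv: torch.Tensor,
                       temperature: float = 1.0) -> torch.Tensor:
    """Illustrative sketch (not the paper's DJMO): combine the clean and
    adversarial objectives with weights that adapt to their current
    magnitudes, so the currently harder objective receives more weight."""
    # Detach so the weights act as scalar coefficients rather than
    # opening an extra gradient path through the weighting itself.
    losses = torch.stack([loss_clean.detach(), loss_adv.detach()])
    weights = torch.softmax(losses / temperature, dim=0)
    return weights[0] * loss_clean + weights[1] * loss_adv


# Hypothetical usage inside a projector-only adversarial training step:
#   loss_clean = model(images, prompts).loss
#   loss_adv   = model(adv_images, adv_prompts).loss
#   loss = dynamic_joint_loss(loss_clean, loss_adv)
#   loss.backward()   # only projector parameters require grad
#   optimizer.step()
```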
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 18789