Maximizing MLLM Visual Jailbreak Effectiveness via Dynamic Stylistic Reinforcement and GRPO Meta-Optimization
Abstract: Multimodal Large Language Models (MLLMs) have achieved remarkable progress in vision-language understanding, yet they remain susceptible to jailbreak attacks that exploit non-content-based weaknesses. Building on the observation of Stylistic Inconsistency between MLLM comprehension and safety behavior, we introduce Dynamic Stylistic Reinforcement (DSR), a meta-optimization framework that adaptively learns stylistic transformations to enhance jailbreak performance. DSR integrates a Group Relative Policy Optimization (GRPO) agent guided by a Composite Reward Function that combines logit-derived refusal signals with a semantic preservation metric from a high-capacity judge model. The system dynamically fine-tunes an image-editing module to superimpose optimal stylistic layers across diverse attack conditions. Empirical evaluations on commercial and open-source MLLMs show that DSR consistently increases Attack Success Rate (ASR) while maintaining high visual fidelity, confirming that adaptive style reinforcement substantially magnifies the potency and generalizability of adversarial visual triggers in multimodal jailbreak scenarios.