Keywords: Safety, Large Multimodal Reasoning Models
TL;DR: We present CoSMo-RL, a unified reinforcement learning framework that jointly improves safety, stability, and reasoning capability in large multimodal reasoning models.
Abstract: Large Multimodal Reasoning Models (LMRMs) are moving into real applications, where they must be both useful and safe. Safety is especially challenging in multimodal settings: images and text can be combined to bypass guardrails, and single-objective training can cause policy drift that yields over-refusal on benign inputs or unsafe compliance on risky ones. We present CoSMo-RL, a mixed reinforcement learning framework that trains reasoning-oriented LMRMs under multimodal, multitask, and multiobjective signals, and we release the resulting model, CoSMo-R1. Our approach aims to let safety and capability grow together in one stable pipeline rather than competing during alignment. In experiments, CoSMo-R1 improves safety while maintaining, and often improving, multimodal reasoning and instruction following; it shows stronger robustness to multimodal jailbreaks and reduces unnecessary refusals. The framework also transfers across backbones with consistent gains. Ablations support the design choices, indicating a simple path to advancing safety and general capability together in LMRMs.
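The abstract's core idea, training under multiple objectives at once so that safety and capability signals shape the same policy update, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the objective names (`safety`, `reasoning`, `instruction`) and the fixed weighted-sum combination are hypothetical, not the paper's actual reward formulation.

```python
# Hypothetical sketch: combine per-objective rewards into one scalar that a
# policy-gradient RL update would maximize. The objective names and weights
# are illustrative assumptions, not CoSMo-RL's actual design.

def mixed_reward(rewards: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-objective rewards (e.g. safety, reasoning,
    instruction following) into a single training signal."""
    return sum(weights[k] * rewards[k] for k in weights)

# Example: a response judged safe (1.0), mostly correct (0.8), and
# largely instruction-following (0.9).
r = mixed_reward(
    {"safety": 1.0, "reasoning": 0.8, "instruction": 0.9},
    {"safety": 0.4, "reasoning": 0.4, "instruction": 0.2},
)
# r == 0.4*1.0 + 0.4*0.8 + 0.2*0.9 == 0.90
```

Because every objective contributes to each update, the policy cannot drift toward one objective (e.g. refusing everything to maximize safety) without paying a cost on the others, which is the intuition behind letting safety and capability "grow together" rather than compete.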
Supplementary Material: pdf
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 3696