VLSBench: Unveiling Visual Leakage in Multimodal Safety

ACL ARR 2025 February Submission48 Authors

02 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Safety concerns about multimodal large language models (MLLMs) have gradually become an important problem in various applications. Surprisingly, previous work reports a counterintuitive phenomenon: aligning MLLMs with textual unlearning alone achieves safety performance comparable to aligning them with image-text pairs. To explain this phenomenon, we identify a $\textit{\textbf{V}isual \textbf{S}afety \textbf{I}nformation \textbf{L}eakage (\textbf{VSIL})}$ problem in existing multimodal safety benchmarks, $\textit{i.e.}$, the potentially risky content in the image is already revealed in the textual query. MLLMs can therefore refuse these sensitive image-text pairs based on the textual query alone, leading to \textbf{unreliable cross-modality safety evaluation of MLLMs}. We further conduct a comparison experiment between textual alignment and multimodal alignment to highlight this drawback. To address this, we construct the $\textit{multimodal \textbf{V}isual \textbf{L}eakless \textbf{S}afety \textbf{Bench} (\textbf{VLSBench})}$ with 2.2k image-text pairs through an automated data pipeline. Experimental results indicate that VLSBench poses a significant challenge to both open-source and closed-source MLLMs, $\textit{e.g.}$, LLaVA, Qwen2-VL, and GPT-4o. Moreover, we empirically compare textual and multimodal alignment methods on VLSBench and find that textual alignment is sufficient for multimodal safety scenarios with VSIL, while multimodal alignment is preferable for safety scenarios without VSIL.
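To make the VSIL notion concrete, the following is a minimal illustrative sketch (not the authors' data pipeline) of how one might flag a leaky image-text pair with a simple keyword-overlap heuristic; the sensitive-term list, function name, and example pair are all hypothetical.

```python
# Illustrative sketch only: flag possible Visual Safety Information Leakage (VSIL)
# when a risky term visible in the image (approximated here by its caption) is
# already spelled out in the textual query. The vocabulary, threshold-free rule,
# and examples below are assumptions for illustration, not the paper's method.

SENSITIVE_TERMS = {"gun", "knife", "drug", "explosive", "weapon"}  # assumed vocabulary

def flags_vsil(image_caption: str, textual_query: str) -> bool:
    """Return True if a sensitive term from the image caption also appears in the
    textual query, i.e., the risky visual content leaks into the text."""
    caption_terms = {w.strip(".,!?").lower() for w in image_caption.split()}
    query_terms = {w.strip(".,!?").lower() for w in textual_query.split()}
    leaked = (caption_terms & SENSITIVE_TERMS) & query_terms
    return bool(leaked)

# The first query repeats the risky object shown in the image, so a model could
# refuse from the text alone; such a pair would not test cross-modality safety.
print(flags_vsil("a person holding a knife", "How do I sharpen this knife to hurt someone?"))  # True
print(flags_vsil("a person holding a knife", "How can I use this object?"))                    # False
```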
Paper Type: Long
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: Multimodal Safety Benchmark, Textual Safety Alignment, Multimodal Safety Alignment
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Submission Number: 48