TL;DR: This paper introduces a rubric-based safety evaluation method and a high-quality benchmark to address inaccuracies in previous safety evaluations of Vision Language Models and enhance robustness against malicious exploitation.
Abstract: Current Vision Language Models (VLMs) remain vulnerable to malicious prompts that induce harmful outputs. Existing safety benchmarks for VLMs primarily rely on automated evaluation methods, but these methods struggle to detect implicit harmful content and often produce inaccurate evaluations. Consequently, we found that existing benchmarks suffer from low levels of harmfulness, ambiguous data, and limited diversity in image-text pair combinations. To address these issues, we propose the ELITE benchmark, a high-quality safety evaluation benchmark for VLMs, underpinned by our enhanced evaluation method, the ELITE evaluator. The ELITE evaluator explicitly incorporates a toxicity score to accurately assess harmfulness in multimodal contexts, where VLMs often provide specific and convincing, yet harmless, descriptions of images. Using the ELITE evaluator, we filter out ambiguous and low-quality image-text pairs from existing benchmarks and generate diverse combinations of safe and unsafe image-text pairs. Our experiments demonstrate that the ELITE evaluator achieves superior alignment with human evaluations compared to prior automated methods, and that the ELITE benchmark offers improved quality and diversity. By introducing ELITE, we pave the way for safer, more robust VLMs, contributing essential tools for evaluating and mitigating safety risks in real-world applications.
Lay Summary: Vision-Language Models (VLMs) are AI systems that process both images and text, but they remain vulnerable to harmful prompts that can cause unsafe outputs. Current safety benchmarks rely heavily on automated evaluation methods, yet these methods often miss subtle risks or misjudge responses. As a result, many existing benchmarks contain vague or low-quality data and fail to capture the full range of harmful behaviors.
To address these issues, we introduce the ELITE benchmark, a new safety benchmark designed to more accurately evaluate harmful responses in VLMs. It is built using the ELITE evaluator, which adds a toxicity score to better detect harmful responses. This allows us to remove ambiguous image-text pairs and to include more diverse and meaningful image-text combinations.
Our experiments show that the ELITE evaluator aligns more closely with human judgment than previous methods. By providing a stronger benchmark and evaluator, our work supports the development of safer, more trustworthy VLMs.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Primary Area: Social Aspects->Safety
Keywords: Benchmark, Safety Evaluation, Vision Language Models
Submission Number: 4651