Towards Evaluating Proactive Risk Awareness of Multimodal Language Models

Published: 18 Sept 2025, Last Modified: 30 Oct 2025 · NeurIPS 2025 Datasets and Benchmarks Track (poster) · CC BY 4.0
Keywords: Daily Risk Detection, Proactive LM
TL;DR: We created a dataset to evaluate current models' ability to proactively detect risks and alert users based on observations of user behavior.
Abstract: Gaps in human safety awareness often prevent the timely recognition of everyday risks. To address this problem, a proactive safety artificial intelligence (AI) system would serve better than a reactive one: rather than merely responding to users' questions, it would actively monitor people's behavior and their environment to detect potential dangers in advance. Our Proactive Safety Bench (PaSBench) evaluates this capability through 416 multimodal scenarios (128 image sequences, 288 text logs) spanning 5 safety-critical domains. Evaluation of 36 advanced models reveals fundamental limitations: top performers such as Gemini-2.5-pro achieve 71% image and 64% text accuracy, yet miss 45-55% of risks in repeated trials. Through failure analysis, we identify unstable proactive reasoning, rather than knowledge deficits, as the primary limitation. This work establishes (1) a proactive safety benchmark, (2) systematic evidence of model limitations, and (3) critical directions for developing reliable protective AI. We believe our dataset and findings can promote the development of safer AI assistants that actively prevent harm rather than merely respond to requests.
Croissant File: json
Dataset URL: https://huggingface.co/datasets/Youliang/PaSBench
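For quick exploration, the benchmark can typically be pulled straight from the Hugging Face Hub with the `datasets` library. The sketch below is a minimal example under stated assumptions: the repository path comes from the Dataset URL above, but the configuration and split names are not taken from the paper and may differ.

```python
# Minimal sketch: load PaSBench from the Hugging Face Hub and peek at one record.
# Assumption: the repo loads with its default configuration; split names may differ.
from datasets import load_dataset

dataset = load_dataset("Youliang/PaSBench")

# List the available splits and print the first example of the first split.
print(dataset)
first_split = next(iter(dataset.values()))
print(first_split[0])
```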
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 1721