VMDT: Decoding the Trustworthiness of Video Foundation Models

Published: 18 Sept 2025 · Last Modified: 30 Oct 2025
Venue: NeurIPS 2025 Datasets and Benchmarks Track (poster)
License: CC BY 4.0
Keywords: Video foundation model, Trustworthiness, Safety, Hallucination, Fairness, Privacy, Robustness
TL;DR: This paper introduces the first unified platform for evaluating text-to-video and video-to-text models across five key dimensions: safety, hallucination, fairness, privacy, and adversarial robustness.
Abstract: As foundation models become more sophisticated, ensuring their trustworthiness becomes increasingly critical; yet, unlike the text and image modalities, video still lacks comprehensive trustworthiness benchmarks. We introduce VMDT (Video-Modal DecodingTrust), the first unified platform for evaluating text-to-video (T2V) and video-to-text (V2T) models across five key trustworthiness dimensions: safety, hallucination, fairness, privacy, and adversarial robustness. Through our extensive evaluation of 7 T2V models and 19 V2T models using VMDT, we uncover several significant insights. For instance, all open-source T2V models evaluated fail to recognize harmful queries and often generate harmful videos, while exhibiting higher levels of unfairness than image-modality models. In V2T models, unfairness and privacy risks rise with scale, whereas hallucination and adversarial robustness improve, though overall performance remains low. Uniquely, safety shows no correlation with model size, implying that factors other than scale govern current safety levels. Our findings highlight the urgent need to develop more robust and trustworthy video foundation models, and VMDT provides a systematic framework for measuring and tracking progress toward this goal. The code is available at https://sunblaze-ucb.github.io/VMDT-page/.
Croissant File: zip
Dataset URL: https://huggingface.co/datasets/mmfm-trust/V2T
Code URL: https://github.com/sunblaze-ucb/VMDT
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 1970