Keywords: Video Large Language Models, Data poisoning, Prompt-guided sampling
TL;DR: We propose PoisonVID, the first black-box poisoning attack against the advanced prompt-guided sampling in VideoLLMs.
Abstract: Video Large Language Models (VideoLLMs) have emerged as powerful tools for understanding videos, supporting tasks such as summarization, captioning, and question answering. Their performance has been driven by advances in frame sampling, progressing from uniform sampling to semantic-similarity-based and, most recently, prompt-guided strategies. While vulnerabilities have been identified in earlier sampling strategies, the safety of prompt-guided sampling remains unexplored. We close this gap by presenting PoisonVID, the first black-box poisoning attack that undermines prompt-guided sampling in VideoLLMs. PoisonVID compromises the underlying prompt-guided sampling mechanism through a closed-loop optimization strategy that iteratively optimizes a universal perturbation to suppress the relevance scores of harmful frames. The optimization is guided by a depiction set constructed from paraphrased harmful descriptions, generated with a shadow VideoLLM and a lightweight language model, GPT-4o-mini. Evaluated comprehensively on three prompt-guided sampling strategies across three advanced VideoLLMs, PoisonVID achieves an \(82\%\)–\(99\%\) attack success rate, underscoring the need to develop more robust sampling strategies for future VideoLLMs.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 12529