Empirical Analysis of Large Vision-Language Models against Goal Hijacking via Visual Prompt Injection

Published: 01 Jan 2024, Last Modified: 30 Jun 2025CoRR 2024EveryoneRevisionsBibTeXCC BY-SA 4.0
Abstract: We explore visual prompt injection (VPI) that maliciously exploits the ability of large vision-language models (LVLMs) to follow instructions drawn onto the input image. We propose a new VPI method, "goal hijacking via visual prompt injection" (GHVPI), that swaps the execution task of LVLMs from an original task to an alternative task designated by an attacker. The quantitative analysis indicates that GPT-4V is vulnerable to the GHVPI and demonstrates a notable attack success rate of 15.8%, which is an unignorable security risk. Our analysis also shows that successful GHVPI requires high character recognition capability and instruction-following ability in LVLMs.
Loading