Defending LVLMs Against Vision Attacks Through Partial-Perception Supervision

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Recent studies have raised significant concerns about the vulnerability of Large Vision Language Models (LVLMs) to maliciously injected or perturbed input images, which can mislead their responses. Existing defenses exploit the observation that such vision attacks are sensitive to image modifications, especially cropping, and take a majority vote over the responses to modified images as the corrected response. However, these modifications often yield partial images and distort their semantics, which degrades response quality on clean images after voting. Instead of using the responses to partial images directly for voting, we investigate using them to supervise (guide) the LVLM's response to the original image at inference time. We propose a black-box, training-free method called **DPS (Defense through Partial-Perception Supervision)**, in which the model is prompted with the responses generated by a model that perceives only a partial image. With DPS, the model can adjust its response based on its partial-image understanding when under attack, while confidently maintaining its original response on clean input. Empirical experiments show that our method outperforms the baseline, cutting the average attack success rate by 76.3\% across six datasets on three popular models.
Lay Summary: Large Vision-Language Models (LVLMs), such as GPT-4, Gemini 1.5 Flash, and Qwen-VL, can recognize and understand both images and text. However, recent studies show that these models can be fooled by attackers who subtly modify input images, causing them to produce incorrect answers (misleading attacks) or bypass safety mechanisms (jailbreak attacks). Current defenses often rely on voting across answers from cropped or modified versions of the image, but this can hurt performance on normal, unmodified inputs. We propose a new method called DPS (Defense through Partial-Perception Supervision), which instead uses the responses from cropped images to guide the model’s response on the original image. Experiments show that DPS performs better than existing methods against both misleading and jailbreak attacks, while preserving the model’s normal accuracy. DPS is a practical defense that works without changing or retraining the model, doesn’t rely on knowing the attack strategy, and can be easily combined with other defenses.
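Below is a minimal sketch of the inference-time pipeline that the abstract and lay summary describe: answer from a cropped (partial) view first, then use that answer to supervise the response to the original image. The `query_lvlm` function, the center-crop ratio, and the prompt wording are illustrative assumptions, not the paper's exact implementation; see the linked repository for the authors' code.

```python
# Illustrative DPS-style pipeline (assumptions: a black-box `query_lvlm` call,
# a 0.6 center-crop ratio, and the supervision prompt wording below).
from PIL import Image


def query_lvlm(image: Image.Image, prompt: str) -> str:
    """Placeholder for a black-box LVLM call; replace with your model/API."""
    raise NotImplementedError


def center_crop(image: Image.Image, ratio: float = 0.6) -> Image.Image:
    """Keep only a central portion of the image (partial perception)."""
    w, h = image.size
    cw, ch = int(w * ratio), int(h * ratio)
    left, top = (w - cw) // 2, (h - ch) // 2
    return image.crop((left, top, left + cw, top + ch))


def dps_answer(image: Image.Image, question: str) -> str:
    # 1) Obtain an answer from a partial view of the image.
    partial_response = query_lvlm(center_crop(image), question)
    # 2) Use that partial-perception answer to supervise the response
    #    to the original (possibly attacked) image.
    supervised_prompt = (
        f"{question}\n"
        f"A model that saw only part of the image answered: {partial_response}\n"
        "If the full image is consistent with this answer, keep your answer; "
        "otherwise, reconsider carefully before responding."
    )
    return query_lvlm(image, supervised_prompt)
```

Because the supervision happens purely through prompting, a sketch like this requires no model access beyond a query interface, which matches the black-box, training-free framing above.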
Link To Code: https://github.com/tools-only/DPS
Primary Area: Deep Learning->Large Language Models
Keywords: large vision language models, safety and security, vision question answering
Submission Number: 10717