Error-Aware Generative Reasoning for Zero-Shot Visual Grounding

Published: 2025 · Last Modified: 05 Jan 2026 · IEEE Trans. Multim. 2025 · CC BY-SA 4.0
Abstract: Zero-shot visual grounding is the task of identifying and localizing an object in an image based on a referring expression, without task-specific training. Existing methods employ heuristic rules to perform visual perception step by step. Despite their remarkable performance, they have two limitations. First, such a rule-based approach struggles with expressions that are not covered by the predefined rules. Second, existing methods lack a mechanism for identifying and correcting perceptual errors arising from incomplete visual information, which leads to cascading errors when reasoning proceeds from incomplete perception results. In this article, we propose an Error-Aware Generative Reasoning (EAGR) method for zero-shot visual grounding. To address the limited adaptability of existing methods, we present a reasoning chain generator that prompts LLMs to dynamically generate reasoning chains for specific referring expressions. This generative approach eliminates the reliance on human-written heuristic rules. To mitigate perceptual errors caused by incomplete information, we present an error-aware mechanism that elicits LLMs to identify such errors and explore correction strategies. Experimental results on four benchmarks show that EAGR outperforms state-of-the-art zero-shot methods by up to 10%, and by 7% on average.
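To make the two components concrete, below is a minimal sketch of the kind of pipeline the abstract describes: an LLM decomposes the referring expression into perception steps, each step is checked for missing information, and flagged steps trigger a correction pass before the final box is selected. All names here (`llm`, `detector`, `Box`, the prompt wording, the correction strategy) are hypothetical stand-ins, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical sketch of an EAGR-style loop. `llm` is any text-completion
# callable (prompt -> str); `detector` is any open-vocabulary detector
# returning labeled boxes. Neither the prompts nor the correction strategy
# are taken from the paper.

@dataclass
class Box:
    label: str                      # object category from the detector
    xyxy: Tuple[int, int, int, int] # (x1, y1, x2, y2) in image coordinates

def generate_chain(llm: Callable[[str], str], expression: str) -> List[str]:
    """Prompt the LLM to decompose the expression into perception steps."""
    prompt = ("Decompose the referring expression into an ordered list of "
              f"visual perception steps, one per line:\n{expression}")
    return [s.strip() for s in llm(prompt).splitlines() if s.strip()]

def step_is_complete(llm: Callable[[str], str], step: str,
                     boxes: List[Box]) -> bool:
    """Let the LLM judge whether current detections suffice for this step."""
    summary = ", ".join(f"{b.label}@{b.xyxy}" for b in boxes) or "none"
    prompt = (f"Step: {step}\nDetected objects: {summary}\n"
              "Is this information complete enough to execute the step? "
              "Answer yes or no.")
    return llm(prompt).strip().lower().startswith("yes")

def ground(llm: Callable[[str], str], detector, image,
           expression: str) -> Box:
    """Error-aware loop: re-query perception when a step flags missing info."""
    boxes = detector(image, expression)
    for step in generate_chain(llm, expression):
        if not step_is_complete(llm, step, boxes):
            # One possible correction strategy: ask the LLM which extra
            # categories are missing, then rerun the detector on them.
            extra = llm(f"Which object categories are missing for: {step}?")
            boxes += detector(image, extra)
    # Final selection: the LLM picks the box that satisfies the expression.
    listing = "\n".join(f"{i}: {b.label}@{b.xyxy}" for i, b in enumerate(boxes))
    idx = int(llm(f"Expression: {expression}\nBoxes:\n{listing}\n"
                  "Reply with the index of the matching box."))
    return boxes[idx]
```

The design point this sketch illustrates is that the chain is generated per expression rather than drawn from fixed rules, and that each perception step carries an explicit completeness check, so errors are caught at the step where they occur instead of cascading into later reasoning.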