RSVP: Reasoning Segmentation via Visual Prompting

Yi Lu, Jiawang Cao, Yongliang Wu, Bozheng Li, Licheng Tang, Yangguang Ji, Chong Wu, Jay Wu, Wenbo Zhu

Published: 01 Jul 2025, Last Modified: 30 Dec 2025
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
License: CC BY-SA 4.0
Abstract: Multi-modal Large Language Models (MLLMs) have demonstrated remarkable reasoning capabilities but lack explicit mechanisms for visual grounding and segmentation, creating a gap between cognitive reasoning and visual perception. To bridge this gap, we introduce Reasoning Segmentation via Visual Prompting (RSVP), a novel framework that unifies multi-step multimodal reasoning with grounded visual understanding. RSVP is a two-stage structured framework that integrates reasoning-driven localization with segmentation refinement. In the reasoning stage, RSVP employs multimodal chain-of-thought visual prompts to help MLLMs understand queries and infer targets, generating interpretable region proposals that enhance visual grounding. In the segmentation stage, RSVP refines these proposals with a Vision-Language Segmentation Module (VLSM), which seamlessly integrates textual and visual cues to produce precise segmentation masks. By explicitly modeling the interaction between multimodal reasoning and segmentation, RSVP introduces a new paradigm for interpretable reasoning segmentation. It exploits MLLMs' inherent localization capabilities, enabling the models not only to reason about objects but also to generate structured visual representations. Our extensive experiments demonstrate that RSVP achieves state-of-the-art performance, surpassing prior methods by up to +6.5 gIoU and +9.2 cIoU on ReasonSeg, and reaching 49.7 mAP on SegInW under zero-shot settings. These results validate RSVP as an effective and scalable framework for integrating cognitive reasoning with structured visual understanding.
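The abstract outlines a two-stage pipeline: an MLLM first reasons its way to interpretable region proposals, and a segmentation module then refines each proposal into a mask. As a rough illustration only, a minimal sketch of that control flow might look like the code below; every name in it (the `mllm` and `vlsm` interfaces, `generate_region_proposals`, `segment`) is a hypothetical placeholder, not the paper's released code or API.

```python
# Minimal sketch of the two-stage RSVP flow described in the abstract.
# All class, method, and parameter names here are hypothetical
# illustrations, not the authors' implementation.

from dataclasses import dataclass
from typing import List

@dataclass
class RegionProposal:
    """An interpretable region hypothesis from the reasoning stage."""
    box: tuple        # (x0, y0, x1, y1) in image coordinates
    rationale: str    # chain-of-thought justification for this region

def reasoning_stage(mllm, image, query: str) -> List[RegionProposal]:
    """Stage 1: multimodal chain-of-thought visual prompting.

    The MLLM is prompted to reason step by step about the query and to
    emit candidate regions (boxes plus textual rationales), not masks.
    """
    prompt = (
        "Reason step by step about which image regions answer the query, "
        f"then list candidate bounding boxes.\nQuery: {query}"
    )
    return mllm.generate_region_proposals(image, prompt)  # assumed interface

def segmentation_stage(vlsm, image, proposals: List[RegionProposal]):
    """Stage 2: the Vision-Language Segmentation Module (VLSM) refines
    each proposal, fusing its textual rationale with the visual cue to
    produce a precise segmentation mask."""
    return [vlsm.segment(image, p.box, p.rationale) for p in proposals]

def rsvp(mllm, vlsm, image, query: str):
    """End to end: reasoning-driven localization, then mask refinement."""
    proposals = reasoning_stage(mllm, image, query)
    return segmentation_stage(vlsm, image, proposals)
```

The split mirrors the paper's stated design: localization decisions stay interpretable (each mask traces back to a boxed region and its rationale), while pixel-level precision is delegated to the segmentation module.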