Keywords: MLLM; Referring Image Segmentation; Referring Expression Segmentation
Abstract: Referring Expression Comprehension and Segmentation are critical tasks for assessing the integration of language understanding and image comprehension, serving as benchmarks for the capabilities of Multimodal Large Language Models (MLLMs).
To address the challenges these tasks pose, we propose a new strategy, CoT Referring, which enhances cross-modal reasoning through a structured, chain-of-thought organization of the training data.
Our approach systematically parses the textual structure of a query into a sequence of referring steps; each step identifies the relevant inter-object relationships and maintains consistent reference alignment, thereby improving accuracy on complex queries.
We restructure the training data to enforce this new output format, provide new annotations for existing datasets, and compile an evaluation benchmark from existing resources that explicitly targets complex referring cases.
We also integrate detection and segmentation capabilities into a unified MLLM framework and train it with a novel adaptive weighted loss to further optimize performance.
Experimental results on our curated benchmark and on RefCOCO/+/g demonstrate the effectiveness of our approach, with a notable improvement of more than 2.5\% over baseline models.
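To make the "sequence of referring steps" concrete, the sketch below shows one plausible shape for a chain-of-thought referring annotation: the query is decomposed into ordered grounding steps, each resolving one entity or relation before the final target is segmented. The field names and schema are illustrative assumptions, not the paper's actual annotation format.

```python
# Hypothetical CoT Referring training example (schema is assumed,
# not taken from the paper). Each step grounds one referent and the
# relation tying it to a previously resolved referent.
example = {
    "query": "the mug on the shelf left of the window",
    "cot_steps": [
        {"step": 1, "refer": "the window", "relation": None},
        {"step": 2, "refer": "the shelf", "relation": "left of the window"},
        {"step": 3, "refer": "the mug", "relation": "on the shelf"},
    ],
    "target": "the mug",  # final referent whose mask is supervised
}
```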
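The abstract does not specify how the adaptive weighted loss balances the detection and segmentation objectives. As a minimal sketch, one common way to weight multiple task losses adaptively is with learnable homoscedastic-uncertainty weights (Kendall et al., 2018); the class below illustrates that technique under the assumption of three loss terms (text, detection, segmentation), and is not necessarily the paper's formulation.

```python
import torch
import torch.nn as nn


class AdaptiveWeightedLoss(nn.Module):
    """Combine per-task losses with learnable adaptive weights.

    Uses uncertainty-style weighting as a stand-in for the paper's
    (unspecified) adaptive weighted loss: each task has a learnable
    log-variance s, contributing exp(-s) * loss + s to the total.
    """

    def __init__(self, num_tasks: int = 3):
        super().__init__()
        # One log-variance per task; exp(-s) acts as the adaptive weight.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses: list[torch.Tensor]) -> torch.Tensor:
        total = torch.zeros((), device=losses[0].device)
        for s, loss in zip(self.log_vars, losses):
            # The +s term penalizes driving a weight to zero.
            total = total + torch.exp(-s) * loss + s
        return total


# Usage (loss tensors are placeholders for model outputs):
criterion = AdaptiveWeightedLoss(num_tasks=3)
l_text, l_det, l_seg = torch.tensor(1.2), torch.tensor(0.8), torch.tensor(0.5)
total_loss = criterion([l_text, l_det, l_seg])
```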
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 6672