Vision-Aware Text Features in Referring Image Segmentation: From Object Understanding to Context Understanding
Abstract: Referring image segmentation is a challenging task that involves generating pixel-wise segmentation masks based on natural language descriptions. The complexity of this task increases with the intricacy of the sentences provided.
Existing methods rely mostly on visual features to generate the segmentation masks, treating text features as supporting components.
However, this under-utilization of text understanding limits the model's ability to fully comprehend the given expressions.
In this work, we propose a novel framework that specifically emphasizes object and context comprehension, inspired by human cognitive processes, through Vision-Aware Text Features.
Firstly, we introduce a CLIP Prior module to localize the main object of interest and embed the object heatmap into the query initialization process.
Secondly, we propose a combination of two components, a Contextual Multimodal Decoder and a Meaning Consistency Constraint, to further enhance the coherent and consistent interpretation of language cues with the contextual understanding obtained from the image.
Our method achieves significant performance improvements on three benchmark datasets: RefCOCO, RefCOCO+, and G-Ref.
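To make the CLIP Prior idea described above concrete, the following is a minimal sketch of how a patch-text similarity heatmap could seed query initialization. It assumes per-patch CLIP visual features and a CLIP text embedding of the expression have already been extracted; the function name, shapes, and heatmap-weighted pooling scheme are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def clip_prior_queries(patch_feats, text_feat, num_queries=16):
    """Sketch of a CLIP-prior heatmap used to initialize decoder queries.

    patch_feats: (HW, D) per-patch CLIP visual features (already extracted).
    text_feat:   (D,)    CLIP embedding of the referring expression.
    Returns an (HW,) heatmap and (num_queries, D) initial queries.
    """
    v = F.normalize(patch_feats, dim=-1)
    t = F.normalize(text_feat, dim=-1)

    # Cosine similarity between every patch and the expression -> coarse object heatmap.
    heatmap = v @ t                          # (HW,)
    weights = torch.softmax(heatmap, dim=0)  # emphasize patches likely to contain the object

    # Heatmap-weighted pooling of visual features seeds each query, so decoding
    # starts focused on the region CLIP associates with the expression.
    pooled = (weights.unsqueeze(-1) * patch_feats).sum(dim=0)  # (D,)
    queries = pooled.unsqueeze(0).repeat(num_queries, 1)       # (num_queries, D)
    queries = queries + 0.02 * torch.randn_like(queries)       # small noise to break symmetry
    return heatmap, queries

# Usage with random stand-in features (a real pipeline would obtain these from CLIP).
patch_feats = torch.randn(196, 512)  # e.g. 14x14 patches, 512-dim features
text_feat = torch.randn(512)
heatmap, queries = clip_prior_queries(patch_feats, text_feat)
```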