Keywords: Referring Expression Segmentation, Synthetic Data, Multimodal Augmentation
Abstract: Despite advances in Referring Expression Segmentation (RES) benchmarks, their evaluation protocols remain constrained, focusing primarily on either single targets with short queries (containing minimal attributes) or multiple targets referred to by distinctly different queries within a single domain. This limitation significantly hinders the assessment of more complex reasoning capabilities in RES models.
We introduce WildRES, a novel benchmark that incorporates long queries with diverse attributes as well as non-distinctive queries for multiple targets. The benchmark spans multiple application domains, enabling more rigorous evaluation of complex reasoning capabilities in real-world settings.
Our analysis reveals that existing RES models suffer substantial performance deterioration when evaluated on WildRES. To address this challenge, we introduce SynRES, an automated pipeline that generates densely paired, compositional synthetic training data through three innovations: (1) dense caption-driven synthesis of attribute-rich image-mask-expression triplets, (2) a reliable semantic alignment mechanism that rectifies inconsistencies between captions and pseudo-masks via Image-Text Aligned Grouping, and (3) domain-aware augmentations incorporating mosaic composition and superclass replacement to emphasize generalization ability and distinguishing attributes over object categories.
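For concreteness, below is a minimal sketch of the mosaic-composition step mentioned in (3), assuming four pre-resized image-mask pairs; the function name and interface are illustrative assumptions, not the authors' implementation:

import numpy as np

def mosaic_compose(samples, tile=256):
    # Illustrative sketch (not the SynRES codebase): compose four
    # pre-resized (image, mask) pairs into one 2x2 mosaic so that each
    # referring expression still grounds to exactly one region.
    # samples: list of 4 dicts with
    #   'image': (tile, tile, 3) uint8 array
    #   'mask' : (tile, tile) bool array marking the referred region
    assert len(samples) == 4
    canvas = np.zeros((2 * tile, 2 * tile, 3), dtype=np.uint8)
    masks = []
    offsets = [(0, 0), (0, tile), (tile, 0), (tile, tile)]
    for s, (y, x) in zip(samples, offsets):
        canvas[y:y + tile, x:x + tile] = s['image']
        m = np.zeros((2 * tile, 2 * tile), dtype=bool)
        m[y:y + tile, x:x + tile] = s['mask']
        masks.append(m)
    return canvas, masks

Composing tiles this way places several candidate objects in one scene, so a query can only be resolved through its distinguishing attributes rather than object category alone.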
Experimental results demonstrate that models trained with SynRES achieve consistent improvements not only on our complex WildRES benchmark but also on classic RES benchmarks (e.g., RefCOCO/+/g).
Code is available at https://anonymous.4open.science/r/SynRES-Review-4B1F.
The dataset will be made available upon acceptance.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 14417