ResVG: Enhancing Relation and Semantic Understanding in Multiple Instances for Visual Grounding

Published: 20 Jul 2024, Last Modified: 21 Jul 2024, MM 2024 Poster, CC BY 4.0
Abstract: Visual grounding aims to localize the object referred to in an image based on a natural language query. Although progress has been made recently, accurately localizing target objects under multiple-instance distractions (multiple objects of the same category as the target) remains a significant challenge. Existing methods suffer a significant performance drop when an image contains multiple distractions, indicating an insufficient understanding of fine-grained semantics and of the spatial relationships between objects. In this paper, we propose a novel approach, the Relation and Semantic-sensitive Visual Grounding (ResVG) model, to address this issue. First, we enhance the model's understanding of fine-grained semantics by injecting semantic prior information derived from text queries into the model. This is achieved by leveraging text-to-image generation models to produce images representing the semantic attributes of the target objects described in the queries. Second, we tackle the lack of training samples with multiple distractions by introducing a relation-sensitive data augmentation method. This method generates additional training data by synthesizing images containing multiple objects of the same category, together with pseudo queries based on their spatial relationships. The proposed ResVG model significantly improves the ability to comprehend both object semantics and spatial relations, leading to enhanced performance in visual grounding tasks, particularly in scenarios with multiple-instance distractions. We conduct extensive experiments on five datasets to validate the effectiveness of our method.
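As a rough illustration of the relation-sensitive augmentation described above, the sketch below shows how a pseudo query could be derived from the spatial layout of several same-category instances. This is a minimal, hypothetical example (the function name, relation templates, and box format are assumptions), not the authors' implementation.

```python
# Illustrative sketch of pseudo-query generation from spatial relations.
# Given bounding boxes of several synthesized objects of the same category,
# describe the target instance by its position relative to the others.

from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def pseudo_query(category: str, boxes: List[Box], target_idx: int) -> str:
    """Generate a simple spatial-relation query for the target instance."""
    centers = [((x0 + x1) / 2, (y0 + y1) / 2) for x0, y0, x1, y1 in boxes]
    by_x = sorted(range(len(boxes)), key=lambda i: centers[i][0])
    by_y = sorted(range(len(boxes)), key=lambda i: centers[i][1])

    if by_x[0] == target_idx:
        return f"the leftmost {category}"
    if by_x[-1] == target_idx:
        return f"the rightmost {category}"
    if by_y[0] == target_idx:
        return f"the topmost {category}"
    if by_y[-1] == target_idx:
        return f"the bottommost {category}"
    return f"the {category} in the middle"

# Example: three dogs pasted side by side; the second one is the target.
print(pseudo_query("dog", [(0, 0, 50, 50), (60, 0, 110, 50), (120, 0, 170, 50)], 1))
# -> "the dog in the middle"
```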
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: This submission falls under the scope of multimedia content understanding. Visual grounding aims to localize the object referred to in an image based on a natural language query, which requires the system to bridge vision and language. In this paper, we propose the Relation and Semantic-sensitive Visual Grounding model to tackle multiple-instance distractions (multiple objects of the same category as the target) in visual grounding tasks. Existing methods suffer a significant performance drop when an image contains multiple distractions, indicating an insufficient understanding of fine-grained semantics and of the spatial relationships between objects. We enhance the model's understanding of fine-grained semantics by injecting semantic prior information derived from text queries into the model, and we introduce a relation-sensitive data augmentation method to address the insufficient understanding of spatial relationships between objects. Experiments on five datasets demonstrate the effectiveness of our method.
Supplementary Material: zip
Submission Number: 5385