Abstract: Visual grounding is the task of locating objects specified by natural language expressions. Existing methods extend generic object detection frameworks to tackle this task. They typically extract visual and textual features separately using independent visual and textual encoders, then fuse these features in a multi-modal decoder for the final prediction. However, visual grounding poses a unique challenge: it often involves locating objects referred to by different text descriptions within the same image. Existing methods struggle with this because the independent visual encoder produces identical visual features for the same image, limiting detection performance. Some recent approaches propose various language-guided visual encoders to address this issue, but they mostly rely solely on textual information and require sophisticated designs. In this paper, we introduce Multi-modal Conditional Adaptation (MMCA), which enables the visual encoder to adaptively update its weights, directing its focus towards text-relevant regions. Specifically, we first integrate information from different modalities to obtain multi-modal embeddings. Then we use a set of weighting coefficients, generated from the multi-modal embeddings, to reorganize the weight update matrices and apply them to the visual encoder of the visual grounding model. Extensive experiments on four widely used datasets demonstrate that MMCA achieves significant improvements and state-of-the-art results. Ablation experiments further demonstrate that our method is lightweight and efficient. Our source code is
available at: https://github.com/Mr-Bigworth/MMCA.
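To make the mechanism concrete, below is a minimal, hypothetical sketch of the conditional-adaptation idea described above: a small head predicts mixing coefficients from a fused multi-modal embedding, and those coefficients reorganize a set of candidate weight-update matrices that are added to a frozen projection in the visual encoder. The low-rank factorization, module names (`ConditionalAdapter`, `coef_head`), and tensor shapes are illustrative assumptions, not the paper's actual implementation; see the released code at the repository above for the authors' version.

```python
# Hypothetical sketch of multi-modal conditional adaptation (names/shapes are assumptions).
import torch
import torch.nn as nn

class ConditionalAdapter(nn.Module):
    """Adds a conditioned update to a frozen linear layer of the visual encoder.

    The update is a weighted combination of K candidate low-rank weight-update
    matrices, with the K coefficients predicted from a multi-modal embedding.
    """
    def __init__(self, base_linear: nn.Linear, embed_dim: int,
                 rank: int = 4, num_candidates: int = 8):
        super().__init__()
        self.base = base_linear                      # frozen visual-encoder projection
        out_dim, in_dim = base_linear.weight.shape
        # K candidate low-rank factors of the weight update (assumed parameterization).
        self.down = nn.Parameter(torch.randn(num_candidates, rank, in_dim) * 0.02)
        self.up = nn.Parameter(torch.zeros(num_candidates, out_dim, rank))
        # Maps the fused multi-modal embedding to K mixing coefficients.
        self.coef_head = nn.Linear(embed_dim, num_candidates)

    def forward(self, x: torch.Tensor, mm_embed: torch.Tensor) -> torch.Tensor:
        # x: (B, N, in_dim) visual tokens; mm_embed: (B, embed_dim) multi-modal embedding.
        coef = torch.softmax(self.coef_head(mm_embed), dim=-1)       # (B, K)
        # Reorganize the candidate update matrices with the predicted coefficients.
        down = torch.einsum('bk,kri->bri', coef, self.down)          # (B, rank, in_dim)
        up = torch.einsum('bk,kor->bor', coef, self.up)              # (B, out_dim, rank)
        # Apply the conditioned update alongside the frozen base projection.
        delta = torch.einsum('bni,bri,bor->bno', x, down, up)        # (B, N, out_dim)
        return self.base(x) + delta
```

In this sketch only the adapter parameters and the coefficient head are trained, which is what would keep such a module lightweight relative to the frozen visual encoder.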
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Multimodal Fusion
Relevance To Conference: Visual grounding, the task of localizing an object specified by a natural language expression, requires effective fusion of visual and linguistic features. Our proposed method, Multi-modal Conditional Adaptation (MMCA), tackles this challenge by leveraging multi-modal information to guide visual feature extraction through parameter updates. MMCA serves as a plug-in module that can be seamlessly integrated with existing visual grounding methods, enhancing the extraction of text-relevant visual features. Furthermore, we believe this method can be extended to parameter-efficient tuning of large multi-modal models, with the potential to use cross-modal information to guide parameter updates.
Submission Number: 3113