Language Query based Transformer with Multi-Scale Cross-Modal Alignment for Visual Grounding on Remote Sensing Images

Published: 30 May 2024 · Last Modified: 12 Jun 2024 · OpenReview Archive Direct Upload · License: CC BY 4.0
Abstract: Visual grounding for remote sensing images (RSVG) aims to localize the object referred to by a language expression in a remote sensing (RS) image. Existing methods tend to align visual and text features, concatenate them, and then employ a fusion Transformer to learn a token representation for final target localization. However, a simple fusion Transformer structure fails to sufficiently learn the location representation of the referred object from the multi-modal features. Inspired by the detection Transformer, in this paper we propose a novel language-query-based Transformer framework for RSVG, termed LQVG. Specifically, we adopt the extracted sentence-level text features as queries, called language queries, to retrieve and aggregate representation information of the referred object from the multi-scale visual features in the Transformer decoder. The language queries are then converted into object embeddings for the final coordinate prediction of the referred object. In addition, a multi-scale cross-modal alignment module is devised before the multi-modal Transformer to enhance the semantic correlation between the visual and text features, thus facilitating the cross-modal decoding process and yielding a more precise object representation. Moreover, a new RSVG dataset named RSVG-HR is built to evaluate the performance of RSVG approaches on very high-resolution remote sensing images with inconspicuous objects. Experimental results on two benchmark datasets demonstrate that our proposed method significantly surpasses the compared methods and achieves state-of-the-art performance.
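To make the language-query idea concrete, below is a minimal PyTorch sketch of a decoder in which a pooled sentence-level text feature serves as the single query attending over flattened multi-scale visual tokens, followed by a box-regression head. The class name `LanguageQueryDecoder`, the layer counts, the assumption that all feature maps share the model dimension, and the (cx, cy, w, h) head are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LanguageQueryDecoder(nn.Module):
    """Illustrative sketch: a sentence-level text feature acts as the
    language query and attends to flattened multi-scale visual tokens."""
    def __init__(self, d_model=256, nhead=8, num_layers=6):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        # Predicts a normalized box (cx, cy, w, h) from the object embedding.
        self.bbox_head = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 4),
        )

    def forward(self, sentence_feat, visual_feats):
        # sentence_feat: (B, d_model) pooled sentence-level text feature
        # visual_feats: list of (B, d_model, H_i, W_i) multi-scale feature maps
        memory = torch.cat(
            [f.flatten(2).transpose(1, 2) for f in visual_feats], dim=1
        )  # (B, sum_i H_i*W_i, d_model) flattened multi-scale tokens
        query = sentence_feat.unsqueeze(1)   # (B, 1, d_model) language query
        obj = self.decoder(query, memory)    # (B, 1, d_model) object embedding
        return self.bbox_head(obj).sigmoid().squeeze(1)  # (B, 4) box

# Example usage with random tensors in place of real text/visual encoders:
dec = LanguageQueryDecoder()
txt = torch.randn(2, 256)
feats = [torch.randn(2, 256, s, s) for s in (32, 16, 8)]
print(dec(txt, feats).shape)  # torch.Size([2, 4])
```

In this reading, using the sentence embedding itself as the decoder query (rather than learned object queries, as in the detection Transformer) ties the cross-attention directly to the referring expression, so the decoder output can be mapped straight to the referred object's coordinates.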