Parameter-Efficient Transfer Learning for Remote Sensing Image-Text Retrieval

Published: 2023 · Last Modified: 14 Nov 2024 · IEEE Trans. Geosci. Remote Sens., 2023 · CC BY-SA 4.0
Abstract: Vision-and-language pretraining (VLP) models have surged in popularity recently. Fine-tuning them on specific datasets yields significant performance improvements across a variety of tasks. However, full fine-tuning of VLP models not only consumes substantial computational resources but also carries a considerable environmental cost. Moreover, as remote sensing (RS) data are constantly being updated, full fine-tuning may not be practical for real-world applications. To address this issue, in this work, we investigate the parameter-efficient transfer learning (PETL) method to effectively and efficiently transfer visual–language knowledge from the natural domain to the RS domain on the image–text retrieval task. To this end, we make the following contributions. First, we construct a novel and sophisticated PETL framework for the RS image–text retrieval (RSITR) task, which includes the pretrained CLIP model, a multimodal RS adapter, and a hybrid multimodal contrastive (HMMC) learning objective. Second, to deal with the problem of high intramodal similarity in RS data, we design a simple yet effective HMMC loss. Third, we provide comprehensive empirical studies for PETL-based RSITR (PE-RSITR). Our results demonstrate that the proposed method is promising and holds great potential for practical applications. Fourth, we benchmark a wide range of state-of-the-art (SOTA) PETL methods on the RSITR task. Our proposed model contains only 0.16 M training parameters, a parameter reduction of 98.9% compared with full fine-tuning, resulting in substantial savings in training costs. Our retrieval performance exceeds traditional methods by 7%–13% and achieves comparable or better performance than full fine-tuning. This work can provide new ideas and useful insights for RS vision–language (VL) tasks.
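The abstract describes an HMMC objective that augments the usual cross-modal contrastive loss with intramodal terms to counter the high intramodal similarity of RS imagery. The following is a minimal NumPy sketch of such a hybrid loss, not the paper's exact formulation: it assumes a symmetric InfoNCE term between image and text embeddings, plus intramodal InfoNCE terms between two augmented views of each modality, mixed by a hypothetical weight `lam`. The function names, the two-view setup, and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def _info_nce(sim, tau=0.07):
    """InfoNCE over a similarity matrix whose diagonal holds the positive pairs."""
    logits = sim / tau
    # numerically stable log-softmax along each row
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def _normalize(x):
    """L2-normalize embeddings row-wise."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def hmmc_loss(img, txt, img_aug, txt_aug, tau=0.07, lam=0.5):
    """Hybrid multimodal contrastive loss (illustrative sketch).

    img, txt:          (N, D) embeddings of matched image-text pairs
    img_aug, txt_aug:  (N, D) embeddings of augmented views of the same samples
    lam:               hypothetical weight balancing intra- vs cross-modal terms
    """
    img, txt = _normalize(img), _normalize(txt)
    img_aug, txt_aug = _normalize(img_aug), _normalize(txt_aug)
    # cross-modal terms: image-to-text and text-to-image retrieval directions
    cross = 0.5 * (_info_nce(img @ txt.T, tau) + _info_nce(txt @ img.T, tau))
    # intramodal terms: each sample vs. its augmented view within one modality
    intra = 0.5 * (_info_nce(img @ img_aug.T, tau) + _info_nce(txt @ txt_aug.T, tau))
    return cross + lam * intra
```

Only the small adapter (0.16 M parameters in the paper) would receive gradients from such a loss; the CLIP backbone stays frozen, which is what makes the transfer parameter-efficient.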