mAggretriever: A Simple yet Effective Approach to Zero-Shot Multilingual Dense Retrieval

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Information Retrieval and Text Mining
Keywords: Multilingual Dense Retrieval, Zero-Shot Language Transferability, Lexical and Semantic Matching
TL;DR: We introduce mAggretriever, which effectively leverages semantic and lexical features from pre-trained multilingual transformers, and demonstrate its strong zero-shot multilingual retrieval capability.
Abstract: Multilingual information retrieval (MLIR) is a crucial yet challenging task due to the need for human annotations in multiple languages, which makes training data creation labor-intensive. In this paper, we introduce mAggretriever, which effectively leverages semantic and lexical features from pre-trained multilingual transformers (e.g., mBERT and XLM-R) for dense retrieval. To enhance training and inference efficiency, we employ approximate masked-language modeling (MLM) prediction for computing lexical features, reducing the GPU memory requirement for mAggretriever fine-tuning by 70–85%. Empirical results demonstrate that mAggretriever, fine-tuned solely on English training data, surpasses existing state-of-the-art multilingual dense retrieval models that undergo further training on large-scale MLIR training data. Our code is available at url.
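To make the lexical-feature idea concrete, below is a minimal PyTorch sketch of one way to approximate MLM prediction without materializing logits over the full multilingual vocabulary. The restrict-to-input-tokens scoring, the ReLU-plus-max pooling, and all names and shapes here are illustrative assumptions for exposition, not the authors' released implementation (see the linked code for that).

```python
import torch
import torch.nn.functional as F

# Illustrative sizes only (roughly XLM-R scale); these are assumptions.
hidden_size, vocab_size, seq_len = 768, 250_002, 32

# Stand-ins for the outputs of a multilingual encoder on one passage.
hidden_states = torch.randn(seq_len, hidden_size)       # per-token hidden states
input_ids = torch.randint(0, vocab_size, (seq_len,))    # the passage's token ids
embedding_table = torch.randn(vocab_size, hidden_size)  # tied MLM output embeddings

# Semantic feature: the [CLS] (first-position) representation.
semantic = hidden_states[0]                             # (hidden_size,)

# Exact MLM prediction would compute hidden_states @ embedding_table.T,
# a (seq_len, vocab_size) matrix -- costly for a ~250k multilingual vocabulary.
# Approximation (an assumption here): score each position only against the
# embeddings of tokens that actually appear in the passage.
candidate_emb = embedding_table[input_ids]              # (seq_len, hidden_size)
scores = hidden_states @ candidate_emb.T                # (seq_len, seq_len)
token_weights = F.relu(scores).amax(dim=0)              # one weight per input token

# Scatter the weights into a sparse bag-of-tokens vector over the vocabulary,
# max-pooling when a token id occurs more than once.
lexical = torch.zeros(vocab_size).scatter_reduce(
    0, input_ids, token_weights, reduce="amax")

# A dense retriever would then compress `lexical` to a fixed small dimension
# and concatenate it with `semantic`; that compression step is omitted here.
print(semantic.shape, lexical.shape)
```

The memory saving in this sketch comes from never forming the (seq_len, vocab_size) logit matrix: scoring is restricted to the tokens present in the passage, which is consistent with the abstract's claim of a 70–85% reduction, though the paper's exact approximation may differ.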
Submission Number: 2186