Abstract: Recent work in cross-language information retrieval (CLIR), where
queries and documents are in different languages, has shown the
benefit of the Translate-Distill framework that trains a cross-language
neural dual-encoder model using translation and distillation. However, Translate-Distill only supports a single document language.
Multilingual information retrieval (MLIR), which ranks a multilingual document collection, is harder to train than CLIR because the
model must assign comparable relevance scores to documents in
different languages. This work extends Translate-Distill and proposes Multilingual Translate-Distill (MTD) for MLIR. We show that
ColBERT-X models trained with MTD outperform their counterparts trained with Multilingual Translate-Train, which is the previous state-of-the-art training approach, by 5% to 25% in nDCG@20
and 15% to 45% in MAP. We also show that the model is robust to the
way languages are mixed in training batches. Our implementation
is available on GitHub.
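To make the batch-mixing point concrete, the following is a minimal illustrative sketch (not the authors' released code) of one way to assemble mixed-language training batches of (query, passage, teacher score) triples for MLIR distillation; the names passages_by_lang and make_batch are hypothetical.

```python
# Hypothetical sketch of mixed-language batch construction for MLIR distillation.
import random
from typing import Dict, List, Tuple

def make_batch(
    query: str,
    passages_by_lang: Dict[str, List[Tuple[str, float]]],  # lang -> [(passage, teacher_score)]
    batch_size: int = 8,
    strategy: str = "mix",  # "mix": sample passages across languages; "single": one language per batch
) -> List[Tuple[str, str, float]]:
    """Return (query, passage, teacher_score) triples for one training batch."""
    if strategy == "single":
        # All passages in this batch come from one randomly chosen language.
        lang = random.choice(list(passages_by_lang))
        pool = passages_by_lang[lang]
    else:
        # Pool passages from every document language so each batch mixes languages.
        pool = [p for plist in passages_by_lang.values() for p in plist]
    sampled = random.sample(pool, min(batch_size, len(pool)))
    return [(query, passage, score) for passage, score in sampled]
```

The robustness result in the abstract suggests that, under this kind of setup, the choice between mixing languages within a batch and keeping batches monolingual has little effect on final effectiveness.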