Cross-lingual QA: A Key to Unlocking In-context Cross-lingual Performance

Published: 18 Jun 2024, Last Modified: 16 Jul 2024
Venue: ICML 2024 Workshop ICL Poster
License: CC BY 4.0
Track: short paper (up to 4 pages)
Keywords: Multilingual large language models, MLLMs, Cross-lingual prompting, QA
TL;DR: We introduce Cross-lingual QA, a prompting method that translates only the question and answer parts of in-context examples.
Abstract: Multilingual large language models (MLLMs) have demonstrated significant cross-lingual capabilities through in-context learning. Existing approaches typically construct monolingual in-context examples, either in the source or the target language. However, translating entire in-context examples into the target language can compromise contextual integrity and is costly for long passages. To address this, we introduce Cross-lingual QA, a cross-lingual prompting method that translates only the question and answer parts of each in-context example, thus reducing translation costs. Experiments on four typologically diverse multilingual benchmarks show that Cross-lingual QA prompting effectively elicits models' cross-lingual knowledge, outperforming prior monolingual prompting approaches. Furthermore, we show that prompting open-source MLLMs with cross-lingual in-context examples yields performance gains that grow with model scale.
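To make the idea concrete, the following is a minimal sketch of how such a prompt could be assembled, not the authors' implementation. The `translate` stub, the `context`/`question`/`answer` field names, and the prompt template are illustrative assumptions; the only property taken from the paper is that the passage stays in its source language while questions and answers are translated.

```python
from typing import Callable

def translate(text: str, target_lang: str) -> str:
    # Placeholder identity function standing in for a real MT system;
    # swap in an actual translation backend in practice.
    return text

def build_crosslingual_qa_prompt(
    examples: list[dict],   # each: {"context": ..., "question": ..., "answer": ...}
    test_context: str,
    test_question: str,
    target_lang: str,
    mt: Callable[[str, str], str] = translate,
) -> str:
    """Translate only the question/answer of each in-context example into the
    target language, leaving the (possibly long) context in its source language."""
    parts = []
    for ex in examples:
        parts.append(
            f"Context: {ex['context']}\n"                     # kept in source language
            f"Question: {mt(ex['question'], target_lang)}\n"  # translated
            f"Answer: {mt(ex['answer'], target_lang)}\n"      # translated
        )
    parts.append(f"Context: {test_context}\nQuestion: {test_question}\nAnswer:")
    return "\n".join(parts)
```

The contrast with full-example translation is that the long context field never passes through the MT system, which is where most of the translation cost, and the risk to contextual integrity, would lie.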
Submission Number: 29