Pre-training Cross-lingual Open Domain Question Answering with Large-scale Synthetic Supervision

ACL ARR 2024 June Submission1059 Authors

14 Jun 2024 (modified: 13 Aug 2024) · CC BY 4.0
Abstract: Cross-lingual open domain question answering (CLQA) is a complex problem, comprising cross-lingual retrieval from a multilingual knowledge base followed by answer generation in the query language. The two steps are usually tackled by separate models, each requiring substantial annotated datasets and, typically, auxiliary resources such as machine translation systems to bridge between languages. In this paper, we show that CLQA can be addressed using a single encoder-decoder model. To effectively train this model, we propose a self-supervised method that exploits the cross-lingual link structure within Wikipedia. We demonstrate how linked Wikipedia pages can be used to synthesise supervisory signals for cross-lingual retrieval, through a form of cloze query, and to generate more natural questions that supervise answer generation. Together, we show that our approach, $\texttt{CLASS}$, outperforms comparable methods in both supervised and zero-shot language adaptation settings, including those using machine translation.
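The cloze-query synthesis described in the abstract can be illustrated with a minimal sketch. All names and data shapes below are illustrative assumptions for exposition, not the paper's actual implementation: a linked entity mention in a query-language sentence is masked to form a cloze query, paired with text from the cross-lingually linked page as a positive retrieval passage, with the masked mention serving as the answer.

```python
# Hypothetical sketch of cloze-query synthesis from cross-lingual
# Wikipedia links. Function names and the example data are assumptions
# made for illustration only.

MASK = "[MASK]"

def make_cloze_query(sentence: str, entity_mention: str) -> str:
    """Mask the first occurrence of a linked entity mention,
    turning the sentence into a cloze-style query."""
    return sentence.replace(entity_mention, MASK, 1)

def synthesise_pair(sentence: str, entity_mention: str, linked_passage: str) -> dict:
    """Pair a masked query (query language) with a passage from the
    cross-lingually linked page (another language) as a positive
    retrieval example; the masked mention supervises answer generation."""
    return {
        "query": make_cloze_query(sentence, entity_mention),
        "positive_passage": linked_passage,
        "answer": entity_mention,
    }

# Example: a Japanese sentence whose mention links to the English page.
pair = synthesise_pair(
    "東京 is the capital of Japan.",
    "東京",
    "Tokyo is the capital and most populous city of Japan.",
)
print(pair["query"])  # → [MASK] is the capital of Japan.
```

In practice such pairs would be harvested at scale from Wikipedia's inter-language link graph, giving retrieval and generation supervision without human annotation.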
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: multilingual QA; open-domain QA
Contribution Types: NLP engineering experiment
Languages Studied: Arabic, Bengali, Finnish, Japanese, Korean, Russian, Telugu
Submission Number: 1059