Abstract: Multilingual pre-trained language models (PLMs) facilitate zero-shot cross-lingual transfer from high-resource languages to low-resource languages in extractive question answering (QA) tasks. However, during fine-tuning on the QA task, the syntactic information of languages encoded in multilingual PLMs is not always preserved and may even be forgotten, which can hurt the detection of answer spans for low-resource languages. In this paper, we propose an auxiliary task of predicting syntactic graphs that reinforces syntax information during the fine-tuning stage of the QA task, thereby improving answer span detection for low-resource languages. The syntactic graph includes Part-of-Speech (POS) information and syntax tree information without dependency parse labels. To adapt to the sequence input of PLMs, we decompose the syntactic graph prediction task into two subtasks: a POS tag prediction task and a syntax tree prediction task (comprising depth prediction for a single word and distance prediction for a pair of words). Moreover, to improve the alignment between languages, we train the syntactic graph prediction tasks on the source language and the target languages in parallel. Extensive experiments on three multilingual QA datasets show the effectiveness of our proposed approach.
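To make the two subtasks concrete, below is a minimal sketch (not the authors' code) of auxiliary heads over a PLM's per-token hidden states, assuming PyTorch; the names (hidden_dim, n_pos_tags, probe_dim), the linear-probe formulation of depth and distance, and the L1 losses are all assumptions for illustration, and the paper's actual architecture may differ.

```python
# Hypothetical auxiliary heads for the two subtasks the abstract describes:
# (1) token-level POS tag classification, (2) syntax tree prediction via
# per-word depth and pairwise word distance regression.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SyntaxAuxHeads(nn.Module):
    def __init__(self, hidden_dim=768, n_pos_tags=17, probe_dim=128):
        super().__init__()
        # Subtask 1: POS tag prediction head.
        self.pos_head = nn.Linear(hidden_dim, n_pos_tags)
        # Subtask 2: a linear projection from which tree depth and
        # pairwise tree distance are both read off.
        self.probe = nn.Linear(hidden_dim, probe_dim, bias=False)

    def forward(self, hidden):              # hidden: (batch, seq, hidden_dim)
        pos_logits = self.pos_head(hidden)  # (batch, seq, n_pos_tags)
        h = self.probe(hidden)              # (batch, seq, probe_dim)
        depth = (h ** 2).sum(-1)            # predicted tree depth per word
        diff = h.unsqueeze(2) - h.unsqueeze(1)
        dist = (diff ** 2).sum(-1)          # predicted distance per word pair
        return pos_logits, depth, dist

def aux_loss(pos_logits, depth, dist, pos_gold, depth_gold, dist_gold):
    """Auxiliary syntactic loss, added to the QA loss during fine-tuning."""
    ce = F.cross_entropy(pos_logits.flatten(0, 1), pos_gold.flatten())
    return ce + F.l1_loss(depth, depth_gold) + F.l1_loss(dist, dist_gold)
```

Under this reading, the parallel training the abstract mentions would amount to computing aux_loss on both a source-language batch and a target-language batch in each fine-tuning step, alongside the standard QA span loss.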