English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

Anonymous

03 Jun 2020 (modified: 03 Jun 2020) · OpenReview Anonymous Preprint Blind Submission
Keywords: zero-shot, cross-lingual, transfer learning
TL;DR: We investigate whether English intermediate-task training helps non-English target tasks in a zero-shot cross-lingual setting, i.e., training and testing on the same task but in different languages.
Abstract: Intermediate-task training has been shown to substantially improve pretrained model performance on many language understanding tasks, at least in monolingual English settings. Here, we investigate whether English intermediate-task training is still helpful on non-English target tasks in a zero-shot cross-lingual setting. Using a set of 7 intermediate language understanding tasks, we evaluate intermediate-task transfer in a zero-shot cross-lingual setting on 9 target tasks from the XTREME benchmark. Intermediate-task training yields large improvements on the BUCC and Tatoeba tasks that use model representations directly without training, and moderate improvements on question-answering target tasks. Using SQuAD for intermediate training achieves the best results across target tasks, with an average improvement of 8.4 points on development sets. Selecting the best intermediate-task model for each target task, we obtain a 6.1 point improvement over XLM-R Large on the XTREME benchmark, setting a new state of the art. Finally, we show that neither multi-task intermediate-task training nor continuing multilingual MLM during intermediate-task training offers significant improvements.
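
To make the experimental recipe concrete, the sketch below shows the general three-phase pipeline the abstract describes: fine-tune a multilingual pretrained model on an English intermediate task, then fine-tune on the English training data of the target task, and finally evaluate zero-shot on the non-English target-task test data. This is an illustrative sketch, not the authors' code: the task pairing (QQP as intermediate task, PAWS-X as target), the use of the Hugging Face Transformers and Datasets libraries, the data subsampling, and all hyperparameters are assumptions made for brevity.

```python
# Illustrative sketch of English intermediate-task training followed by
# zero-shot cross-lingual evaluation (assumed task pairing: QQP -> PAWS-X).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "xlm-roberta-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def tokenize_pairs(dataset, col_a, col_b):
    # Both QQP and PAWS-X are binary sentence-pair classification tasks.
    return dataset.map(
        lambda ex: tokenizer(ex[col_a], ex[col_b], truncation=True, max_length=128),
        batched=True,
    )

def fine_tune(model, train_set, output_dir):
    # Illustrative hyperparameters only; not the paper's settings.
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=2,
                             per_device_train_batch_size=16, learning_rate=2e-5)
    Trainer(model=model, args=args, train_dataset=train_set,
            tokenizer=tokenizer).train()
    return model

# Phase 1: intermediate-task training on an English task (subsampled for speed).
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
qqp = tokenize_pairs(load_dataset("glue", "qqp", split="train[:20000]"),
                     "question1", "question2")
model = fine_tune(model, qqp, "xlmr-qqp")

# Phase 2: target-task training on English data only (PAWS-X English train set).
pawsx_en = tokenize_pairs(load_dataset("paws-x", "en", split="train"),
                          "sentence1", "sentence2")
model = fine_tune(model, pawsx_en, "xlmr-qqp-pawsx")

# Phase 3: zero-shot evaluation on a non-English test set (here: German).
pawsx_de = tokenize_pairs(load_dataset("paws-x", "de", split="test"),
                          "sentence1", "sentence2")
eval_args = TrainingArguments(output_dir="eval", per_device_eval_batch_size=32)
print(Trainer(model=model, args=eval_args, tokenizer=tokenizer)
      .evaluate(eval_dataset=pawsx_de))
```

The key point of the setup is that no non-English data is used for training in either phase; any gain on the German test set over a baseline that skips Phase 1 is attributable to the English intermediate task.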