An Unsupervised Multiple-Task and Multiple-Teacher Model for Cross-lingual Named Entity Recognition

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: Cross-lingual named entity recognition is one of the critical problems for evaluating potential transfer learning techniques on low-resource languages. Knowledge distillation between source and target languages using pre-trained multilingual language models has shown its superiority. However, existing cross-lingual distillation models merely consider the potential transferability between two identical single tasks across both domains. Other possible auxiliary tasks that could improve learning performance have not been fully investigated. In this study, building on the knowledge distillation framework and multi-task learning, we introduce a similarity metric model as an auxiliary task to improve cross-lingual NER performance on the target domain. Specifically, an entity recognizer teacher and a similarity evaluator teacher are first trained in parallel on the source domain. Then, the two corresponding tasks in the student model are supervised by the two teachers simultaneously. Empirical studies on datasets across 7 different languages confirm the effectiveness of the proposed model.
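The abstract describes a student supervised jointly by two teachers: an entity recognizer and a similarity evaluator. A minimal sketch of how such a two-teacher objective might be combined is shown below; the paper does not specify its exact loss functions, so the KL-divergence for the NER head, the MSE for the similarity head, and the `alpha` weighting are all illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # numerically stable softmax over the tag dimension
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # mean KL(p || q) over tokens; p, q are probability distributions
    return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

def multi_teacher_loss(student_ner_logits, teacher_ner_logits,
                       student_sim_scores, teacher_sim_scores,
                       alpha=0.5):
    """Illustrative joint distillation loss (not the paper's exact formulation).

    - NER head: match the student's per-token tag distribution to the
      entity-recognizer teacher's soft labels (KL divergence).
    - Similarity head: regress the similarity-evaluator teacher's scores (MSE).
    - alpha: assumed scalar weight balancing the two tasks.
    """
    ner_loss = kl_div(softmax(teacher_ner_logits), softmax(student_ner_logits))
    sim_loss = float(np.mean((student_sim_scores - teacher_sim_scores) ** 2))
    return alpha * ner_loss + (1.0 - alpha) * sim_loss
```

If the student's outputs exactly match both teachers', the combined loss is zero; any disagreement on either task increases it, so gradient descent on this objective pushes the student toward both teachers at once.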