Abstract: In the absence of sizable training data for most world languages and NLP tasks, translation-based strategies such as translate-test---evaluating on noisy source language data translated from the target language---and translate-train---training on noisy target language data translated from the source language---have been established as competitive approaches for cross-lingual transfer (XLT).
For token classification tasks, these strategies require label projection: mapping the label of each token in the original sentence to its counterpart(s) in the translation. To this end, it is common to leverage multilingual word aligners (WAs) derived from encoder language models such as mBERT or LaBSE. Despite the obvious connection between machine translation (MT) and WA, research on extracting alignments from MT models has largely been limited to exploiting cross-attention in encoder-decoder architectures, yielding poor WA results.
In contrast, in this work we propose TransAlign, a novel word aligner that utilizes the encoder of a massively multilingual MT model. We show that TransAlign not only achieves strong WA performance but also substantially outperforms popular WA and state-of-the-art non-WA-based label projection methods in MT-based XLT for token classification.
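To make the label-projection step concrete, below is a minimal, self-contained sketch of the two ingredients the abstract mentions: word alignment from encoder token embeddings (cosine similarity with a mutual-argmax rule, a common heuristic in embedding-based aligners such as SimAlign) and copying token labels across that alignment. The random embeddings and the align/project_labels helpers are illustrative assumptions standing in for an MT encoder's representations; this is not TransAlign's actual procedure.

import numpy as np

def align(src_emb: np.ndarray, tgt_emb: np.ndarray) -> list[tuple[int, int]]:
    """Return (src_idx, tgt_idx) pairs whose cosine similarity is a
    mutual argmax, a common heuristic in embedding-based word aligners."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T                 # (n_src, n_tgt) similarity matrix
    fwd = sim.argmax(axis=1)          # best target for each source token
    bwd = sim.argmax(axis=0)          # best source for each target token
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]

def project_labels(src_labels: list[str], alignment: list[tuple[int, int]],
                   n_tgt: int) -> list[str]:
    """Copy each source token's label onto its aligned target token(s);
    unaligned target tokens fall back to the 'O' (outside) label."""
    tgt_labels = ["O"] * n_tgt
    for i, j in alignment:
        tgt_labels[j] = src_labels[i]
    return tgt_labels

# Toy example: random vectors stand in for MT-encoder token embeddings.
rng = np.random.default_rng(0)
src_emb = rng.normal(size=(3, 8))                               # "Obama visited Paris"
tgt_emb = src_emb[[0, 2, 1]] + 0.01 * rng.normal(size=(3, 8))   # reordered translation
alignment = align(src_emb, tgt_emb)
print(project_labels(["B-PER", "O", "B-LOC"], alignment, n_tgt=3))
# -> ['B-PER', 'B-LOC', 'O']

The mutual-argmax rule keeps only alignments that agree in both directions, trading recall for precision; target tokens left unaligned simply default to the outside label.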
Paper Type: Short
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: cross-lingual transfer, less-resourced languages
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: Bambara, Ewé, Fon, Hausa, Igbo, Kinyarwanda, Luganda, Luo, Mossi, Chichewa, chiShona, Kiswahili, Setswana, Akan/Twi, Wolof, isiXhosa, Yorùbá, isiZulu, Arabic, Danish, German, South-Tyrolean, Indonesian, Italian, Kazakh, Dutch, Turkish, Chinese
Keywords: translation-based cross-lingual transfer, token classification, machine translation
Submission Number: 1015