Abstract: Multilingual humans can and do seamlessly switch back and forth between languages when communicating. However, multilingual machine translation (MT) models are not robust to such sudden changes in input. In this work, we explore the robustness of multilingual MT models to language switching and propose checks to measure a model's switching capability. We also investigate simple and effective data augmentation methods that enhance robustness to switching. A glass-box analysis of attention modules demonstrates the effectiveness of these methods in improving robustness.
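The abstract does not specify which augmentation methods are used; as a hypothetical illustration only, one simple family of such methods creates synthetic switched inputs by concatenating source sentences drawn from different languages (with their references concatenated in the same order). The sketch below assumes this concatenation-style augmentation; the function name `make_switched_pairs` and the data layout are illustrative, not from the paper.

```python
# Hypothetical sketch of a simple data augmentation for language-switching
# robustness: join source sentences from two different languages into one
# synthetic "switched" input, pairing it with the concatenated references.
import random

def make_switched_pairs(corpora, n_samples, seed=0):
    """corpora: dict mapping language code -> list of (src, tgt) pairs,
    where all targets share one language (e.g., English).
    Returns n_samples synthetic pairs whose source switches language mid-input."""
    rng = random.Random(seed)
    langs = list(corpora)
    out = []
    for _ in range(n_samples):
        la, lb = rng.sample(langs, 2)          # two distinct source languages
        src_a, tgt_a = rng.choice(corpora[la])
        src_b, tgt_b = rng.choice(corpora[lb])
        # Source juxtaposes the two languages; target concatenates the
        # corresponding references in the same order.
        out.append((f"{src_a} {src_b}", f"{tgt_a} {tgt_b}"))
    return out

if __name__ == "__main__":
    corpora = {
        "de": [("Guten Morgen.", "Good morning.")],
        "fr": [("Merci beaucoup.", "Thank you very much.")],
    }
    for src, tgt in make_switched_pairs(corpora, 2):
        print(src, "->", tgt)
```

Such synthetic pairs can be mixed into the regular multilingual training data, exposing the model to mid-input language changes it would otherwise rarely see.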
Paper Type: long