Abstract: Neural machine translation (NMT) systems exhibit limited robustness to source-side linguistic variation. Their performance tends to degrade when faced with even slight deviations in language usage, such as shifts in domain or variations introduced by second-language speakers. It is intuitive to extend this observation to dialectal variation as well, but the resources that would allow the community to evaluate MT systems on this dimension are limited. To alleviate this issue, we compile and release CODET, a contrastive dialectal benchmark encompassing 891 different variations from twelve different languages. We also quantitatively demonstrate the challenges large MT models face in effectively translating dialectal variants. All data and code will be released upon acceptance.
Paper Type: long
Research Area: Machine Translation
Contribution Types: Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: Italian, Swiss German, Basque, Arabic, Bengali, Central Kurdish, Farsi, Malay, Indonesian, Swahili, Tigrinya, Aranese, Central Occitan, Griko
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.