Ready to Translate, Not to Represent? Bias and Performance Gaps in Multilingual LLMs Across Language Families and Domains
Abstract: The rise of Large Language Models (LLMs) has redefined Machine Translation (MT), enabling context-aware, fluent translation across hundreds of languages and textual domains. Despite these capabilities, LLMs often exhibit uneven performance across language families and specialized domains. Moreover, recent evidence shows that these models can encode and amplify biases present in their training data, raising serious fairness concerns, especially for low-resource languages. To address these gaps, we introduce Translation Tangles, a unified framework and dataset for evaluating the translation quality and fairness of open-source LLMs. Our approach benchmarks 24 bidirectional language pairs across multiple domains using multiple evaluation metrics. We further propose a hybrid bias detection pipeline that integrates rule-based heuristics, semantic similarity filtering, and LLM-based validation. We also introduce a high-quality, bias-annotated dataset based on human evaluations of 1,439 translation-reference pairs. The code and dataset are publicly available at: https://anonymous.4open.science/r/TranslationTangles-EABE/
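The abstract describes a hybrid bias detection pipeline that chains rule-based heuristics, semantic similarity filtering, and LLM-based validation. The sketch below illustrates one way such a three-stage pipeline could be wired together; the lexicon, threshold, and helper names (`embed`, `ask_llm`, `detect_bias`) are hypothetical placeholders for illustration only, not the paper's actual implementation (see the linked repository for that).

```python
# Illustrative sketch of a three-stage hybrid bias-detection pipeline:
# rule-based heuristics -> semantic similarity filtering -> LLM-based validation.
# All names, lexicon entries, and thresholds are hypothetical, not the authors' code.
import math
from dataclasses import dataclass
from typing import Callable, Sequence

# Stage 1 resource: a toy lexicon of bias-indicative terms (hypothetical).
BIAS_LEXICON = {
    "gender": {"he", "she", "his", "her"},
    "occupation": {"nurse", "engineer", "secretary"},
}

@dataclass
class BiasFlag:
    pair_id: int
    category: str
    matched_terms: set

def cosine(u: Sequence[float], v: Sequence[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rule_based_flags(pair_id: int, translation: str) -> list:
    """Stage 1: flag a translation if it contains lexicon terms."""
    tokens = set(translation.lower().split())
    return [
        BiasFlag(pair_id, category, tokens & terms)
        for category, terms in BIAS_LEXICON.items()
        if tokens & terms
    ]

def passes_semantic_filter(
    source: str,
    translation: str,
    embed: Callable[[str], Sequence[float]],
    threshold: float = 0.75,
) -> bool:
    """Stage 2: keep a flag only when source and translation diverge
    semantically (low similarity suggests the translation shifted meaning)."""
    return cosine(embed(source), embed(translation)) < threshold

def llm_validates(
    source: str,
    translation: str,
    category: str,
    ask_llm: Callable[[str], str],
) -> bool:
    """Stage 3: ask an LLM judge to confirm or reject the suspected bias."""
    prompt = (
        f"Source: {source}\nTranslation: {translation}\n"
        f"Does the translation introduce {category} bias? Answer yes or no."
    )
    return ask_llm(prompt).strip().lower().startswith("yes")

def detect_bias(pairs, embed, ask_llm):
    """Run all three stages over (source, translation) pairs."""
    confirmed = []
    for pair_id, (source, translation) in enumerate(pairs):
        for flag in rule_based_flags(pair_id, translation):
            if (passes_semantic_filter(source, translation, embed)
                    and llm_validates(source, translation, flag.category, ask_llm)):
                confirmed.append(flag)
    return confirmed
```

Passing `embed` and `ask_llm` in as callables keeps the sketch independent of any particular embedding model or LLM backend; in practice one would plug in a multilingual sentence encoder and an open-source LLM judge, as the abstract's pipeline description suggests.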
Paper Type: Long
Research Area: Machine Translation
Research Area Keywords: multilingual MT, few-shot/zero-shot MT, benchmarking, biases, human evaluation, NLP datasets
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English, Czech, German, French, Russian, Finnish, Lithuanian, Estonian, Gujarati, Kazakh, Bangla, Chinese, Turkish
Previous URL: https://openreview.net/forum?id=A8t5YqOd9f
Explanation Of Revisions PDF: pdf
Reassignment Request Area Chair: No, I want the same area chair from our previous submission (subject to their availability).
Reassignment Request Reviewers: No, I want the same set of reviewers from our previous submission (subject to their availability).
Data: zip
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: N/A
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 4, Appendix C
B2 Discuss The License For Artifacts: N/A
B3 Artifact Use Consistent With Intended Use: N/A
B4 Data Contains Personally Identifying Info Or Offensive Content: N/A
B5 Documentation Of Artifacts: N/A
B6 Statistics For Data: Yes
B6 Elaboration: Section 6
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 4
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 3, Section 4
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 5
C4 Parameters For Packages: Yes
C4 Elaboration: Section 3, Section 4
D Human Subjects Including Annotators: Yes
D1 Instructions Given To Participants: Yes
D1 Elaboration: Section 6
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E AI Assistants In Research Or Writing: Yes
E1 Information About Use Of AI Assistants: N/A
Author Submission Checklist: Yes
Submission Number: 836