Improving LLM Abilities in Idiomatic Translation

ACL ARR 2024 August Submission 294 Authors

16 Aug 2024 (modified: 12 Sept 2024) · CC BY 4.0
Abstract: Translating idioms remains a challenge for large language models (LLMs) and neural translation systems such as GPT and NLLB. Our goal is to enhance translation fidelity by improving how LLMs process idiomatic language while preserving the original linguistic style, so that translated texts retain their cultural nuances, intent, and emotional resonance, fostering better cross-cultural communication. Previous work has used knowledge bases such as IdiomKB, supplying the LLM with an idiom's meaning for use in translation. Although this approach yields better results than direct translation, it remains limited in its ability to preserve idiomatic writing style across languages. In this research, we extend the knowledge base to find corresponding idioms in the target language. We perform translations using two novel methods: the first uses a SentenceTransformers model to compute cosine similarity scores between the meanings of the source- and target-language idioms and selects the best match (Cosine Similarity Lookup method); the second prompts an LLM to find a corresponding idiom in the target language for use in the translation (LLM-generated idiom method). As a baseline, we performed a direct translation without providing additional information. Human evaluations on English -> Chinese, Chinese -> English, and Hindi -> English translations show that the Cosine Similarity Lookup method outperformed the others in all GPT-4o translations. To further build upon IdiomKB, we developed low-resource Urdu and Hindi datasets containing idioms and their translations. Despite dataset limitations, the Cosine Similarity Lookup method shows promise for overcoming language barriers and enabling the exploration of diverse literary works in Chinese, Urdu, and Hindi. For access to the code and replication of our experiments, please visit our GitHub.
Paper Type: Long
Research Area: Multilingualism and Cross-Lingual NLP
Research Area Keywords: English, Mandarin, Urdu, Hindi, low-resource languages, cosine similarity lookup, idiomatic translation, LLM-generated idioms
Contribution Types: Approaches to low-resource settings, Data resources, Data analysis
Languages Studied: English, Mandarin, Hindi, Urdu
Submission Number: 294
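The abstract's Cosine Similarity Lookup method (embedding idiom meanings and selecting the target-language idiom whose meaning is most similar to the source idiom's) can be sketched as follows. This is a minimal illustration, not the authors' released code: the toy vectors stand in for SentenceTransformers embeddings, and the idiom strings and meanings are hypothetical examples chosen for demonstration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_target_idiom(source_meaning_vec, target_idioms, target_meaning_vecs):
    """Pick the target-language idiom whose meaning embedding is most
    similar to the source idiom's meaning embedding."""
    scores = [cosine_similarity(source_meaning_vec, v) for v in target_meaning_vecs]
    i = int(np.argmax(scores))
    return target_idioms[i], scores[i]

# Toy 3-d vectors standing in for real sentence embeddings.
# In the paper's setup these would come from a SentenceTransformers model
# encoding the English-language meanings of each idiom.
source_vec = np.array([0.9, 0.1, 0.0])          # meaning of the source idiom
candidates = ["idiom A", "idiom B", "idiom C"]  # hypothetical target idioms
candidate_vecs = [
    np.array([0.88, 0.15, 0.05]),  # close in meaning to the source
    np.array([0.10, 0.90, 0.20]),  # unrelated meaning
    np.array([0.00, 0.20, 0.95]),  # unrelated meaning
]

idiom, score = best_target_idiom(source_vec, candidates, candidate_vecs)
print(idiom, round(score, 3))
```

In practice the embeddings would be produced by a multilingual sentence-embedding model so that meanings written in different languages share one vector space; the selected idiom is then supplied to the LLM in the translation prompt.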