Upper-bound Translation Performance of Llama2

Anonymous

16 Oct 2023 · ACL ARR 2023 October Blind Submission
Abstract: Large Language Models (LLMs) achieve state-of-the-art results across many tasks, but machine translation remains challenging for them. Our work explores the translation capability of Llama-2-7b-chat and Llama-2-13b-chat under an idealized setup in which the model is given all the information needed to generate the correct translation. To achieve this setup, and to investigate the factors affecting these models' performance, we create an artificial language. Our findings show that Llama-2-13b-chat exhibits strong translation abilities, outperforming 92% of supervised English-to-XX NMT systems in BLEU and 85% in chrF++. This work underscores the potential of LLMs as translators and gives insight into the resources needed to realize their full potential.
Paper Type: short
Research Area: Machine Translation
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Publicly available software and/or pre-trained models
Languages Studied: English
Consent To Share Submission Details: On behalf of all authors, we agree to the terms above to share our submission details.