Do Large Language Models Speak All Languages Equally? A Comparative Study in Low-Resource Settings

ACL ARR 2024 April Submission 545 Authors

16 Apr 2024 (modified: 01 Jun 2024) · ACL ARR 2024 April Submission · CC BY 4.0
Abstract: Large language models (LLMs) have attracted significant interest in natural language processing (NLP), particularly for their strong performance on downstream tasks in resource-rich languages. Recent studies have highlighted the limitations of LLMs in low-resource languages, but they focus primarily on binary classification tasks and give minimal attention to South Asian languages; these limitations stem largely from dataset scarcity, computational costs, and research gaps specific to low-resource languages. To address this gap, we present sentiment and hate speech datasets created by translating English data into Bangla, Hindi, and Urdu, facilitating research in low-resource language processing. We further conduct a comprehensive zero-shot evaluation of multiple LLMs in English and these widely spoken South Asian languages. Our findings indicate that GPT-4 consistently outperforms Llama 2 and Gemini, and that English consistently yields superior performance across diverse tasks compared to the low-resource languages. Our analysis also reveals that natural language inference (NLI) achieves the highest performance among the evaluated tasks, with GPT-4 again leading.
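The submission does not include code here, but as a rough illustration of the zero-shot evaluation setup the abstract describes, the sketch below formats a label-constrained prompt for sentiment classification and computes accuracy over a labeled dataset. The `query_llm` wrapper, prompt wording, and label set are assumptions for illustration only, not the authors' actual protocol.

```python
# Minimal sketch of zero-shot classification evaluation, assuming a
# generic query_llm(prompt) -> str wrapper around any chat LLM
# (GPT-4, Llama 2, Gemini, ...). Labels and prompt are illustrative.
from typing import Callable, List, Tuple

LABELS = ["positive", "negative", "neutral"]  # assumed sentiment label set

def build_prompt(text: str) -> str:
    # Constrain the model to the label set so the reply is parseable.
    return (
        "Classify the sentiment of the following text as one of "
        f"{', '.join(LABELS)}. Reply with the label only.\n\n"
        f"Text: {text}\nLabel:"
    )

def zero_shot_accuracy(
    data: List[Tuple[str, str]],       # (text, gold_label) pairs
    query_llm: Callable[[str], str],   # hypothetical model wrapper
) -> float:
    correct = 0
    for text, gold in data:
        reply = query_llm(build_prompt(text)).strip().lower()
        # Fall back to substring matching when the model is verbose.
        pred = next((l for l in LABELS if l in reply), reply)
        correct += pred == gold
    return correct / len(data)
```

The same loop applies unchanged to the translated Bangla, Hindi, and Urdu test sets, since only the input text varies across languages.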
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: Large Language Models, Natural Language Inference, Sentiment Analysis, Hate Speech, Model Evaluations
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: Bangla, English, Hindi, Urdu
Submission Number: 545