Leveraging Large Language Models for Code-Mixed Data Augmentation in Sentiment Analysis

Published: 14 Nov 2024, Last Modified: 23 May 2025. 2024 EMNLP Second Workshop on Social Influence in Conversations (SICon). License: CC BY 4.0
Abstract: Code-mixing (CM), where speakers blend languages within a single expression, is prevalent in multilingual societies but poses challenges for natural language processing due to its complexity and limited data. We propose using a large language model to generate synthetic CM data, which is then used to enhance the performance of task-specific models for CM sentiment analysis. Our results show that in Spanish-English, synthetic data improved the F1 score by 9.32%, outperforming previous augmentation techniques. However, in Malayalam-English, synthetic data only helped when the baseline was low; with strong natural data, additional synthetic data offered little benefit. Human evaluation confirmed that this approach is a simple, cost-effective way to generate natural-sounding CM sentences, particularly beneficial when baselines are low. Our findings suggest that few-shot prompting of large language models is a promising method for CM data augmentation and has a significant impact on improving sentiment analysis, an important element in the development of social influence systems.
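To make the augmentation idea concrete, the sketch below shows one way a few-shot prompt for generating synthetic code-mixed sentences might be assembled. The seed sentences, labels, and function name are illustrative assumptions, not taken from the paper; the actual prompts and LLM used are described in the full text.

```python
# Minimal sketch of few-shot prompt construction for synthetic
# code-mixed (CM) data generation. The seed examples below are
# invented for illustration, not drawn from the paper's data.

SEED_EXAMPLES = [
    ("I love este lugar, the food is incredible!", "positive"),
    ("El servicio was terrible, never coming back.", "negative"),
]

def build_cm_prompt(seed_examples, target_label, n_outputs=5):
    """Assemble a few-shot prompt asking an LLM to produce new
    Spanish-English code-mixed sentences with a given sentiment."""
    lines = [
        "Generate Spanish-English code-mixed sentences for sentiment analysis.",
        "Examples:",
    ]
    for text, label in seed_examples:
        lines.append(f"- ({label}) {text}")
    lines.append(
        f"Now write {n_outputs} new {target_label} code-mixed sentences "
        "in the same style, one per line."
    )
    return "\n".join(lines)

prompt = build_cm_prompt(SEED_EXAMPLES, "positive")
print(prompt)
```

The resulting prompt would be sent to an LLM, and the generated sentences added to the training set of the task-specific sentiment model.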