Beyond Monolingual Assumptions: A Survey on Code-Switched NLP in the Era of Large Language Models across Modalities
Keywords: Code-Mixing, Code-Switching, Survey, LLMs, Low-resource languages
Abstract: Despite rapid advances in large language models (LLMs), most still struggle with mixed-language inputs, while limited code-switching (CSW) datasets and evaluation biases further hinder their deployment in multilingual societies. This survey provides the first comprehensive analysis of CSW-aware LLM research, reviewing 327 studies spanning five research areas, 15+ NLP tasks, 30+ datasets, and 80+ languages. We classify recent advances by architecture, training strategy, and evaluation methodology, outlining how LLMs have reshaped CSW modelling and what challenges persist. The paper concludes with a roadmap emphasizing the need for inclusive datasets, fair evaluation, and linguistically grounded models to achieve truly multilingual intelligence. A curated collection of all resources is maintained at https://anonymous.4open.science/r/awesome-code-mixing/.
Paper Type: Long
Research Area: Multilinguality and Language Diversity
Research Area Keywords: Code-Mixing, Code-Switching, Survey, LLMs, Low-resource languages
Contribution Types: Position papers, Surveys
Languages Studied: Arabic, Bengali, Cantonese, Chinese, Darija (Moroccan), English, French, German, Hindi, isiZulu, Kannada, Kazakh, Korean, Malayalam, Mandarin, Marathi, Roman-Urdu, Russian, Sinhala, Spanish, Tamil, Ukrainian, Urdu, Vietnamese, Yoruba.
Submission Number: 4844