MonCulture-Eval: A Hierarchical Benchmark for Evaluating Mongolian Cultural Capabilities of Large Language Models across Scripts and Regions
Keywords: Large Language Models, Low-Resource Languages, Evaluation Benchmark, Cultural Alignment, Mongolian, Multilingual NLP.
Abstract: While Large Language Models (LLMs) have achieved impressive linguistic fluency in low-resource languages, their ability to grasp deep cultural nuances remains under-explored. This paper introduces MonCulture-Eval, a comprehensive benchmark designed to evaluate the Cultural Intelligence of LLMs in Mongolian across two distinct writing systems (Traditional and Cyrillic) and three major regional sub-cultures (Alxa, Ordos, and Horqin). Constructed via an "Indigenous-First" approach, the benchmark is structured into a three-layer cognitive framework—Factual, Situational, and Values—complemented by specialized tasks including Riddles, Taboos, and Proverbs. Evaluating state-of-the-art models (GPT-5.2, Gemini-3-pro, DeepSeek-v3.2, and Claude-Sonnet-4-5) reveals significant limitations in current systems. First, we identify a severe "Script Gap," where most models perform significantly worse in Traditional Mongolian, effectively cutting them off from deep cultural archives. Second, qualitative analysis uncovers a prevalent "Tourist Perspective" (Etic Bias): models frequently sanitize spiritual rituals into secular safety regulations and hallucinate functional rationales for symbolic taboos. Notably, Gemini-3-pro demonstrates exceptional "Emic" alignment in the Values layer, while others exhibit a sharp "Cognitive Depth Drop-off." Our findings underscore that translation capability does not equate to cultural understanding, highlighting the urgent need for culturally-aware alignment strategies in inclusive AI.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Resources and Evaluation, Low-resource NLP, Datasets, Evaluation Methodologies, Multilingual NLP, Ethics, Large Language Models
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources
Languages Studied: Mongolian
Submission Number: 9920