Abstract: Large language models (LLMs) pretrained primarily on English data often reflect Western-centric biases, limiting their effectiveness in diverse cultural contexts. While some work has explored cultural alignment, the potential for cross-cultural transfer, i.e., using alignment in one culture to improve performance in others, remains underexplored. This paper investigates cross-cultural transfer in the Arab world, where linguistic and historical similarities coexist with local cultural differences. Using a culturally grounded commonsense reasoning dataset covering 13 Arab countries, we evaluate lightweight alignment methods such as in-context learning (ICL) and demonstration-based reinforcement (DITTO), alongside baselines like instruction fine-tuning (IFT) and Direct Preference Optimization (DPO). Our results show that just 12 culture-specific examples from one country can improve performance in others by 15–20% on average. These findings demonstrate that efficient cross-cultural alignment is possible and offer a promising approach to reducing Western bias in LLMs while advancing culturally fair NLP in low-resource settings.
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Cultural alignment, cross-cultural transfer, ICL, SFT, DPO
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data analysis
Languages Studied: Arabic, Indonesian
Submission Number: 8031