ALIGN: Word Association Learning for Cross-Cultural Generalization in Large Language Models

ACL ARR 2025 July Submission 1321 Authors

29 Jul 2025 (modified: 27 Aug 2025) · ACL ARR 2025 July Submission · CC BY 4.0
Abstract: As large language models (LLMs) increasingly mediate cross-cultural communication, their behavior still reflects the distributional bias of the languages and viewpoints that are over-represented in their pre-training corpora. Modeling and aligning culture nevertheless remains difficult, owing to limited cultural knowledge in existing resources and little exploration of effective learning approaches. We introduce a cost-efficient, cognitively grounded remedy: parameter-efficient fine-tuning on native speakers’ free \emph{word-association} norms, which encode implicit cultural schemas. Leveraging US English and Mandarin associations from the Small World of Words project, we adapt \textsc{Llama-3.1-8B} and \textsc{Qwen-2.5-7B} via supervised fine-tuning (SFT) and PPO-based preference optimization. SFT boosts held-out association Precision@5 by 16–20\% in English and 43–165\% in Mandarin, lifts median concreteness by +0.20, and attains human-level valence and arousal. These lexical gains transfer: on World Values Survey questions, fine-tuned models shift answer distributions toward the target culture, and on a 50-item high-tension subset, Qwen’s Chinese-aligned responses nearly double (13 → 25) while Llama’s US bias drops by one-third (20 → 24). Our 7–8B models rival or beat vanilla 70B baselines, showing that a few million culture-grounded associations can instill value alignment without costly retraining. Our work highlights both the promise of, and the need for, research grounded in human cognition for improving cultural alignment in AI models.\footnote{All code and data will be released upon acceptance.}
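
For readers unfamiliar with the headline metric, the following is a minimal sketch of how Precision@5 can be computed over held-out word-association norms, assuming the model produces a ranked list of associates for each cue and the gold set is the native speakers’ responses; the function and variable names are illustrative and are not taken from the authors’ released code.

```python
# Hypothetical sketch: Precision@5 for a single cue word.
# "gold_associates" and "model_associates" are toy illustrations,
# not real SWOW norms or model outputs.

def precision_at_k(predicted, gold, k=5):
    """Fraction of the top-k predicted associates that appear in the
    human (gold) association set for a cue word."""
    top_k = predicted[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for w in top_k if w in gold)
    return hits / len(top_k)

gold_associates = {"tea", "ceremony", "family", "red"}         # human responses
model_associates = ["tea", "dragon", "family", "wall", "red"]  # model's top 5

print(precision_at_k(model_associates, gold_associates))  # 0.6
```

In practice such a score would be averaged over all held-out cues per language, which is the quantity the reported 16–20% (English) and 43–165% (Mandarin) relative improvements refer to.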
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: cultural alignment, cultural word associations, efficient cultural learning
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: English, Mandarin Chinese
Submission Number: 1321