Cultural Value Differences of LLMs: Prompt, Language, and Model Size

ACL ARR 2024 June Submission10 Authors

04 Jun 2024 (modified: 22 Jul 2024) · CC BY 4.0
Abstract: Our study aims to identify behavior patterns in the cultural values exhibited by large language models (LLMs). The variables studied include question ordering, prompting language, and model size. Our experiments reveal that each tested LLM can readily exhibit different cultural values. More interestingly: (i) LLMs exhibit relatively consistent cultural values when presented with prompts in a single language. (ii) The prompting language (e.g., Chinese or English) can influence the expression of cultural values: the same question can elicit divergent cultural values when the same LLM is queried in a different language. (iii) Differences in size within the same model family (e.g., Llama2-7B vs. 13B vs. 70B) have a more significant impact on the demonstrated cultural values than differences between model families (e.g., Llama2 vs. Mixtral). Overall, our experiments indicate that query language and model size are the main factors driving cultural value differences in LLMs.
Paper Type: Long
Research Area: Computational Social Science and Cultural Analytics
Research Area Keywords: Large Language Model, Cultural Values
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Surveys
Languages Studied: English, Chinese
Submission Number: 10