Enhancing Situated Cultural Reasoning in Large Language Models via Simulation Learning

ACL ARR 2026 January Submission9394 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · Readers: Everyone · License: CC BY 4.0
Keywords: cultural alignment, large language model, situated reasoning, simulation learning
Abstract: As large language models (LLMs) are increasingly deployed in real-world applications, they inevitably encounter culturally sensitive scenarios that require applying cultural knowledge or values to the context in order to generate appropriate responses. However, existing research largely treats culture as static knowledge or abstract values, leaving the application of cultural norms in situated interactions underexplored. To address this gap, we focus on the situated cultural reasoning of LLMs and propose CuSiR, a simulation learning framework that combines simulated scenarios with reinforcement learning. We construct datasets covering both knowledge-based and social scenarios and conduct experiments across multiple role perspectives, instructions, settings, and models. Our results indicate that the framework effectively enhances LLMs' ability to apply existing cultural knowledge, thereby improving their performance on tasks set in cultural scenarios. All code is provided in the supplementary materials and will be made publicly available online.
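The abstract describes simulation learning with an RL-style reward but gives no implementation details. The sketch below is purely illustrative and is not the authors' released code: every name in it (CulturalScenario, policy, judge_reward, train) is a hypothetical stand-in. It shows only the generic shape such a pipeline could take: sample a response in a simulated cultural scenario, score it with a judge of cultural fit, and reinforce high-reward behavior.

```python
# Hypothetical sketch of a simulation-learning loop with an RL-style reward.
# Not the paper's implementation; all names here are illustrative stand-ins.
import random
from dataclasses import dataclass

@dataclass
class CulturalScenario:
    context: str   # situated setting (knowledge-based or social)
    role: str      # perspective the model adopts (e.g., host, guest)
    norm: str      # cultural norm an appropriate response should respect

def policy(scenario: CulturalScenario, candidates: list[str],
           weights: dict[str, float]) -> str:
    """Sample a candidate response, favoring higher learned weights."""
    scores = [max(weights.get(c, 1.0), 1e-6) for c in candidates]
    return random.choices(candidates, weights=scores, k=1)[0]

def judge_reward(scenario: CulturalScenario, response: str) -> float:
    """Toy reward: 1.0 if the response reflects the target norm, else 0.0.
    In a real pipeline this would be an LLM or human judge of cultural fit."""
    return 1.0 if scenario.norm in response else 0.0

def train(scenarios: list[CulturalScenario], candidates: list[str],
          steps: int = 1000, lr: float = 0.1) -> dict[str, float]:
    """REINFORCE-like loop: upweight responses that earn reward in simulation."""
    weights: dict[str, float] = {}
    for _ in range(steps):
        s = random.choice(scenarios)
        r = policy(s, candidates, weights)
        reward = judge_reward(s, r)
        weights[r] = weights.get(r, 1.0) + lr * (reward - 0.5)
    return weights

if __name__ == "__main__":
    # Minimal usage example with one simulated scenario.
    scenarios = [CulturalScenario("dinner invitation in Japan", "guest",
                                  "remove shoes")]
    candidates = ["remove shoes before entering", "keep shoes on"]
    print(train(scenarios, candidates))
```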
Paper Type: Long
Research Area: Computational Social Science, Cultural Analytics, and NLP for Social Good
Research Area Keywords: language/cultural bias analysis, NLP tools for social analysis
Contribution Types: NLP engineering experiment, Reproduction study, Data resources
Languages Studied: English
Submission Number: 9394