Multicultural Spyfall: Assessing LLMs through a Dynamic Multilingual Social Deduction Game

ACL ARR 2026 January Submission 6986 Authors

06 Jan 2026 (modified: 20 Mar 2026), CC BY 4.0
Keywords: multilingual, benchmarks, evaluation, game, multicultural
Abstract: The rapid advancement of Large Language Models (LLMs) has created a need for more robust evaluation methods that go beyond static benchmarks, which are increasingly prone to saturation and data leakage. In this paper, we propose a dynamic benchmarking framework that evaluates multilingual and multicultural capabilities through the social deduction game Spyfall. In our setup, models must engage in strategic dialogue to either identify a secret agent or avoid detection, drawing on culturally relevant locations or local foods. Our results show that the resulting game-based rankings align closely with Chatbot Arena. However, we find a significant performance gap in non-English contexts: models are generally less proficient at handling locally specific entities and often struggle with rule-following or strategic integrity in non-English languages. We demonstrate that this game-based approach provides a scalable, leakage-resistant, and culturally nuanced alternative to traditional NLP benchmarks.
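To make the described setup concrete, the following is a minimal Python sketch of one Spyfall round with LLM-backed players. It is an illustration only, not the paper's implementation: the player interface, prompts, vote counting, and example locations are all assumptions, and a real evaluation would replace the stub with actual model API calls.

```python
import random
from typing import Callable

# Hypothetical model interface: takes a prompt, returns the model's reply.
# In the paper's setup this would wrap an LLM API call; here it is a stub.
AskFn = Callable[[str], str]

def play_spyfall(players: dict[str, AskFn], locations: list[str], rounds: int = 2) -> str:
    """Run one simplified Spyfall game and return the accused player's name.

    One player is secretly the spy and does not know the location; everyone
    else is told the location (a culturally specific entity such as a local
    food or landmark). Players exchange short turns, then vote on the spy.
    """
    names = list(players)
    spy = random.choice(names)
    location = random.choice(locations)
    transcript: list[str] = []

    # Dialogue rounds: each player contributes one short turn per round.
    for _ in range(rounds):
        for name in names:
            role_info = (
                "You are the SPY and do not know the location."
                if name == spy
                else f"The secret location is: {location}."
            )
            prompt = (
                f"{role_info}\n"
                "Transcript so far:\n" + "\n".join(transcript) + "\n"
                f"As {name}, ask or answer one short question without "
                "revealing too much. Reply in the game's target language."
            )
            transcript.append(f"{name}: {players[name](prompt)}")

    # Voting phase: each player names the player they believe is the spy.
    votes: dict[str, int] = {}
    for name in names:
        prompt = (
            "Transcript:\n" + "\n".join(transcript) + "\n"
            f"As {name}, which player is the spy? Answer with one name from: "
            + ", ".join(n for n in names if n != name)
        )
        vote = players[name](prompt).strip()
        if vote in names:
            votes[vote] = votes.get(vote, 0) + 1

    # Majority vote; fall back to the spy if no valid votes were cast.
    accused = max(votes, key=votes.get) if votes else spy
    return accused  # compare against `spy` to score the game

if __name__ == "__main__":
    # Stub player: replace with real LLM calls to benchmark actual models.
    stub = lambda prompt: "I visit this place often."
    result = play_spyfall(
        {"model_a": stub, "model_b": stub, "model_c": stub},
        locations=["Warung Tegal", "Khan el-Khalili", "Night market"],
    )
    print("Accused:", result)
```

Repeating such games across languages and culturally local location lists, then aggregating spy-detection and evasion outcomes into rankings, is the kind of dynamic, leakage-resistant evaluation the abstract describes.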
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: multilingual benchmarks, evaluation, evaluation methodologies
Contribution Types: NLP engineering experiment
Languages Studied: Indonesian, Egyptian Arabic, Chinese
Submission Number: 6986