EXECUTE: A Multilingual Benchmark for LLM Token Understanding

ACL ARR 2025 February Submission 3709 Authors

15 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: The CUTE benchmark showed that LLMs struggle with character-level understanding in English. We extend it to more languages with diverse scripts and writing systems, introducing EXECUTE. Our simplified framework allows easy expansion to any language. Tests across multiple LLMs reveal that the challenges in other languages do not always lie at the character level, as they do in English: some languages show word-level processing issues, while others show no issues at all. We also examine sub-character tasks in Chinese, Japanese, and Korean to assess LLMs' understanding of character components.
Paper Type: Short
Research Area: Language Modeling
Research Area Keywords: robustness, prompting, subword representations, multilingualism
Contribution Types: Data resources
Languages Studied: Amharic, Arabic, Chinese, English, Hindi, Japanese, Korean, Russian
Submission Number: 3709