Keywords: Large Language Models, Multilingual Benchmarks, Instruction Following, Instruction Constraints, Less-resourced Languages
TL;DR: We introduce XIFBench, a constraint-based benchmark for evaluating multilingual instruction-following in LLMs across languages with diverse resource levels.
Abstract: Large Language Models (LLMs) have demonstrated remarkable instruction-following capabilities across various applications. However, their performance in multilingual settings remains largely unexamined, and existing evaluations lack fine-grained constraint analysis across diverse linguistic contexts. We introduce **XIFBench**, a comprehensive constraint-based benchmark for evaluating the multilingual instruction-following abilities of LLMs, comprising 558 instructions with 0-5 additional constraints across five categories (*Content*, *Style*, *Situation*, *Format*, and *Numerical*) in six languages spanning different resource levels. To support reliable and consistent cross-lingual evaluation, we implement three methodological innovations: cultural accessibility annotation, constraint-level translation validation, and requirement-based evaluation using English requirements as semantic anchors across languages. Extensive experiments with various LLMs not only quantify performance disparities across resource levels but also provide detailed insights into how language resources, constraint categories, instruction complexity, and cultural specificity influence multilingual instruction-following. Our code and data are available at https://github.com/zhenyuli801/XIFBench.
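To make the requirement-based evaluation described in the abstract concrete, here is a minimal, hypothetical sketch of how an instruction with categorized constraints and English-anchored requirements might be represented and scored. All names and fields below are illustrative assumptions, not the authors' released evaluation code; per-requirement pass/fail judgments would typically come from an LLM judge or human annotators.

```python
# Hypothetical sketch: representing a constrained instruction and aggregating
# per-requirement judgments into satisfaction rates. Field names, categories,
# and the scoring function are assumptions for illustration only.
from dataclasses import dataclass
from typing import List, Dict

CATEGORIES = ["Content", "Style", "Situation", "Format", "Numerical"]

@dataclass
class Constraint:
    category: str          # one of CATEGORIES
    requirement_en: str    # English requirement used as the semantic anchor

@dataclass
class Instruction:
    instruction_id: str
    language: str          # e.g. "ar" for the Arabic version of the instruction
    text: str              # localized instruction shown to the model
    constraints: List[Constraint]

def satisfaction_rates(instr: Instruction, judgments: List[bool]) -> Dict[str, float]:
    """Aggregate per-requirement pass/fail judgments into an overall rate
    and per-category rates for a single instruction."""
    assert len(judgments) == len(instr.constraints)
    per_cat: Dict[str, List[bool]] = {}
    for constraint, passed in zip(instr.constraints, judgments):
        per_cat.setdefault(constraint.category, []).append(passed)
    rates = {cat: sum(v) / len(v) for cat, v in per_cat.items()}
    rates["overall"] = sum(judgments) / len(judgments) if judgments else 1.0
    return rates

if __name__ == "__main__":
    example = Instruction(
        instruction_id="xif-0001",
        language="ar",
        text="...",  # localized instruction text elided
        constraints=[
            Constraint("Format", "The answer must be a bulleted list."),
            Constraint("Numerical", "The list must contain exactly three items."),
        ],
    )
    print(satisfaction_rates(example, judgments=[True, False]))
```

Averaging such per-instruction rates by language and by constraint category is one plausible way to surface the resource-level and category-level disparities the benchmark is designed to measure.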
Croissant File: json
Dataset URL: https://doi.org/10.7910/DVN/9EBYA4
Code URL: https://github.com/zhenyuli801/XIFBench
Primary Area: Evaluation (e.g., data collection methodology, data processing methodology, data analysis methodology, meta studies on data sources, extracting signals from data, replicability of data collection and data analysis and validity of metrics, validity of data collection experiments, human-in-the-loop for data collection, human-in-the-loop for data evaluation)
Submission Number: 1640