Ticket-Bench: A Kickoff for Multilingual and Regionalized Agent Evaluation

ICLR 2026 Conference Submission 18967 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: LLM agent evaluation, multilingual, function-calling evaluation
TL;DR: Ticket-Bench is a benchmark that tests LLM agents across six languages; our results show strong overall performance but notable gaps between languages.
Abstract: Large language models (LLMs) are increasingly deployed as task-oriented agents, where success depends on their ability to generate accurate function calls under realistic, multilingual conditions. However, existing agent evaluations largely overlook cultural and linguistic diversity, often relying on monolingual or naively translated benchmarks. We introduce Ticket-Bench, a benchmark for multilingual agent evaluation in task-oriented scenarios. Ticket-Bench simulates the domain of soccer ticket purchases across six major languages (Portuguese, English, Spanish, German, Italian, and French), using localized teams, cities, and user profiles to provide greater realism. We evaluate a wide range of commercial and open-source LLMs, measuring function-calling accuracy and consistency across languages. Results show that reasoning-oriented models (e.g., GPT-5, Qwen3-235B) dominate performance but still exhibit notable cross-lingual disparities. These findings underscore the need for culturally aware, multilingual benchmarks to guide the development of robust LLM agents.
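To make the evaluation setup concrete, the sketch below illustrates one plausible way to score function-calling accuracy per language, of the kind the abstract describes. This is a minimal, hypothetical example: the `buy_ticket` schema, field names, and exact-match rule are assumptions for illustration, not Ticket-Bench's actual harness.

```python
import json

# Hypothetical gold call for a localized ticket-purchase task.
# Schema mirrors common function-calling APIs; it is an illustrative
# assumption, not Ticket-Bench's published specification.
gold = {
    "name": "buy_ticket",
    "arguments": {"team": "Flamengo", "city": "Rio de Janeiro", "quantity": 2},
}

def call_matches(pred_json: str, gold_call: dict) -> bool:
    """Exact-match scoring: correct only if the function name and
    every argument value match the gold call."""
    try:
        pred = json.loads(pred_json)
    except json.JSONDecodeError:
        return False  # malformed model output counts as an incorrect call
    return (pred.get("name") == gold_call["name"]
            and pred.get("arguments") == gold_call["arguments"])

# Per-language accuracy is then the fraction of matched calls.
results = {"pt": [True, True, False], "en": [True, True, True]}
accuracy = {lang: sum(r) / len(r) for lang, r in results.items()}
print(accuracy)  # {'pt': 0.666..., 'en': 1.0}
```

Cross-lingual consistency, as mentioned in the abstract, could then be read off as the spread (e.g., max minus min) of these per-language accuracies.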
Primary Area: datasets and benchmarks
Submission Number: 18967