CricBench: A Multilingual Benchmark for Evaluating LLMs in Cricket Analytics

ACL ARR 2026 January Submission8014 Authors

06 Jan 2026 (modified: 20 Mar 2026), ACL ARR 2026 January Submission, CC BY 4.0
Keywords: Text-to-SQL, Domain-Specific Benchmarking, Multilingual NLP, Sports Analytics, Large Language Models
Abstract: Cricket is the second most popular sport globally, commanding a massive following of over 2.5 billion fans globally. Enthusiasts and analysts frequently seek advanced statistical insights such as long-term historical performance trends or complex player comparisons that are often unavailable through standard web searches. While Large Language Models (LLMs) have advanced significantly in Text-to-SQL tasks, their capability to handle the domain-specific nuances, complex schema variations, and multilingual requirements inherent to sports analytics remains under-explored. To investigate this potential capability gap, we present **CricBench**, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data. To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness. Recognizing linguistic diversity, we construct the benchmark in both English and Hindi, establishing a framework that is open for further extension to other regional languages. We evaluate six state-of-the-art models including GPT-4o, Claude 3.7 Sonnet, and open-source models using a strict evaluation protocol. Our results reveal that high performance on general benchmarks does not guarantee success in specialized domains. While the open-weights reasoning model DeepSeek R1 achieves state-of-the-art performance (50.6\%), surpassing proprietary giants like Claude 3.7 Sonnet (47.7\%) and GPT-4o (33.7\%), it still exhibits a significant accuracy drop when moving from general benchmarks (BIRD) to CricBench. Furthermore, we observe that code-mixed Hindi queries frequently yield parity or higher accuracy compared to English, challenging the assumption that English is the optimal prompt language for specialized SQL tasks.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Text-to-SQL, Benchmark Creation, Multilingual Resources, Domain-Specific Evaluation, Semantic Parsing, Database Query Generation, Low-Resource Languages, Code-Mixing, Evaluation Metrics, Dataset Curation
Contribution Types: NLP engineering experiment, Data resources, Data analysis
Languages Studied: English, Hindi
Submission Number: 8014