CricBench: A Multilingual Benchmark for Evaluating LLMs in Cricket Analytics
By: Vaibhav Devraj, Dhruv Kumar, Jagat Sesh Challa
Potential Business Impact:
Helps computers answer tough cricket questions.
Cricket is the second most popular sport in the world, commanding a massive following of over 2.5 billion fans. Enthusiasts and analysts frequently seek advanced statistical insights, such as long-term historical performance trends or complex player comparisons, that are often unavailable through standard web searches. While Large Language Models (LLMs) have advanced significantly in Text-to-SQL tasks, their capability to handle the domain-specific nuances, complex schema variations, and multilingual requirements inherent to sports analytics remains under-explored. To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data. To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness. Recognizing linguistic diversity, we construct the benchmark in both English and Hindi, establishing a framework that is open to further extension to other regional languages. We evaluate six state-of-the-art models, including GPT-4o, Claude 3.7 Sonnet, and open-source models, using a strict evaluation protocol. Our results reveal that high performance on general benchmarks does not guarantee success in specialized domains. While the open-weights reasoning model DeepSeek R1 achieves state-of-the-art performance (50.6%), surpassing proprietary giants like Claude 3.7 Sonnet (47.7%) and GPT-4o (33.7%), it still exhibits a significant accuracy drop when moving from general benchmarks (BIRD) to CricBench. Furthermore, we observe that code-mixed Hindi queries frequently match or exceed English accuracy, challenging the assumption that English is the optimal prompt language for specialized SQL tasks.
Similar Papers
Benchmarking Hindi LLMs: A New Suite of Datasets and a Comparative Analysis
Computation and Language
Tests Hindi AI to understand language better.
Mind the (Language) Gap: Towards Probing Numerical and Cross-Lingual Limits of LVLMs
CV and Pattern Recognition
Tests computers reading cricket scores in different languages.