Benchmarking Hindi LLMs: A New Suite of Datasets and a Comparative Analysis
By: Anusha Kamath, Kanishk Singla, Rakesh Paul, and more
Potential Business Impact:
Tests how well AI models understand Hindi.
Evaluating instruction-tuned Large Language Models (LLMs) in Hindi is challenging due to a lack of high-quality benchmarks, as direct translation of English datasets fails to capture crucial linguistic and cultural nuances. To address this, we introduce a suite of five Hindi LLM evaluation datasets: IFEval-Hi, MT-Bench-Hi, GSM8K-Hi, ChatRAG-Hi, and BFCL-Hi. These were created using a methodology that combines from-scratch human annotation with a translate-and-verify process. We leverage this suite to conduct extensive benchmarking of open-source LLMs supporting Hindi, providing a detailed comparative analysis of their current capabilities. Our curation process also serves as a replicable methodology for developing benchmarks in other low-resource languages.
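The abstract describes the translate-and-verify process only at a high level and includes no code. As a rough illustration, such a stage might be wired up like the sketch below, where `translate` and `verify` are hypothetical hooks standing in for a machine-translation system and a human annotator; this is an assumption for illustration, not the authors' actual tooling.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class BenchmarkItem:
    source_en: str                   # original English benchmark prompt
    draft_hi: str                    # machine-translated Hindi draft
    final_hi: Optional[str] = None   # human-verified Hindi text


def translate_and_verify(
    english_items: list[str],
    translate: Callable[[str], str],    # hypothetical MT hook
    verify: Callable[[str, str], str],  # hypothetical annotator hook
) -> list[BenchmarkItem]:
    """Two-stage pipeline: machine-translate each English item into Hindi,
    then pass the draft to a human verifier who corrects linguistic and
    cultural issues before the item enters the benchmark."""
    items = []
    for src in english_items:
        draft = translate(src)
        item = BenchmarkItem(source_en=src, draft_hi=draft)
        # The annotator sees both the source and the draft, and may rewrite
        # the item entirely if a literal translation loses cultural nuance.
        item.final_hi = verify(src, draft)
        items.append(item)
    return items


# Toy usage with stand-in lambdas; a real pipeline would plug in an actual
# MT system and a human annotation interface here.
if __name__ == "__main__":
    demo = translate_and_verify(
        ["What is 2 + 3?"],
        translate=lambda s: f"<machine-translated Hindi draft of: {s}>",
        verify=lambda src, draft: draft.replace("machine-translated", "human-verified"),
    )
    print(demo[0].final_hi)
```

Keeping the English source alongside the Hindi draft, as the dataclass does, is one way a verification step can let annotators catch meaning drift rather than just surface-level translation errors.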
Similar Papers
From Phonemes to Meaning: Evaluating Large Language Models on Tamil
Computation and Language
Tests computers on Tamil language understanding.
IberBench: LLM Evaluation on Iberian Languages
Computation and Language
Tests AI language skills across Iberian languages such as Spanish and Portuguese.
Improving Multilingual Capabilities with Cultural and Local Knowledge in Large Language Models While Enhancing Native Performance
Computation and Language
Helps computers understand Hindi and English better.