IndicGEC: Powerful Models, or a Measurement Mirage?
By: Sowmya Vajjala
In this paper, we report the results of TeamNRC's participation in the BHASHA-Task 1 Grammatical Error Correction shared task (https://github.com/BHASHA-Workshop/IndicGEC2025/) for five Indian languages. Our approach, which focuses on zero/few-shot prompting of language models of varying sizes (from 4B parameters to large proprietary models), achieved Rank 4 in Telugu and Rank 2 in Hindi, with GLEU scores of 83.78 and 84.31 respectively. We then extend the experiments to the other three languages of the shared task - Tamil, Malayalam and Bangla - and take a closer look at the data quality and the evaluation metric used. Our results primarily highlight the potential of small language models, and summarize the concerns around creating good-quality datasets and appropriate metrics for this task that are suitable for Indian language scripts.
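To make the evaluation metric concrete: GLEU compares a system's corrected output against reference corrections via n-gram overlap. A minimal sketch using NLTK's sentence-level GLEU follows; the shared task's exact scoring script, tokenization, and handling of Indic scripts may differ (whitespace tokenization here is an assumption, and reported scores like 83.78 are the 0-1 score scaled by 100).

```python
# Minimal sketch of GLEU-style evaluation for a corrected sentence,
# using NLTK's sentence_gleu. Example sentences are invented, and
# whitespace tokenization is an assumption; the shared task's official
# scorer may tokenize differently (especially for Indic scripts).
from nltk.translate.gleu_score import sentence_gleu


def gleu(reference: str, hypothesis: str) -> float:
    """Score one hypothesis against one reference correction (0.0-1.0)."""
    ref_tokens = reference.split()   # naive whitespace tokenization
    hyp_tokens = hypothesis.split()
    # sentence_gleu takes a list of reference token lists
    return sentence_gleu([ref_tokens], hyp_tokens)


# An exact match with the reference correction scores 1.0
print(gleu("she goes to school", "she goes to school"))  # → 1.0
```

Scores reported in leaderboards are typically averaged over the test set and multiplied by 100.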