Crosslingual Optimized Metric for Translation Assessment of Indian Languages
By: Arafat Ahsan, Vandan Mujadia, Pruthwik Mishra, and more
Potential Business Impact:
Helps computers judge Indian language translations better.
Automatic evaluation of translation remains a challenging task owing to the orthographic, morphological, syntactic, and semantic richness and divergence observed across languages. String-based metrics such as BLEU have previously been used extensively for automatic evaluation tasks, but their limitations are now increasingly recognized. Although learned neural metrics have helped mitigate some of the limitations of string-based approaches, they remain constrained by a paucity of gold evaluation data in most languages beyond the usual high-resource pairs. In this work we address some of these gaps. We create a large human evaluation ratings dataset for 13 Indian languages covering 21 translation directions and then train a neural translation evaluation metric named Cross-lingual Optimized Metric for Translation Assessment of Indian Languages (COMTAIL) on this dataset. The best-performing metric variants show significant performance gains over the previous state of the art when judging translation pairs with at least one Indian language. Furthermore, we conduct a series of ablation studies to highlight the sensitivity of such a metric to changes in domain, translation quality, and language groupings. We release both the COMTAIL dataset and the accompanying metric models.
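To make the abstract's point about string-based metrics concrete, the sketch below is a deliberately simplified sentence-level BLEU (add-one smoothing, whitespace tokenization; not the corpus-level sacrebleu implementation and not the COMTAIL metric). It shows how a pure surface-overlap score drops sharply under a single synonym substitution, even though the meaning is preserved, which is one of the limitations that learned metrics aim to address for morphologically rich languages.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, hypothesis, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of smoothed
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    ref, hyp = reference.split(), hypothesis.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        ref_counts = Counter(ngrams(ref, n))
        hyp_counts = Counter(ngrams(hyp, n))
        # clipped overlap: each hypothesis n-gram counts only as often
        # as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(len(hyp) - n + 1, 0)
        # add-one smoothing so one empty n-gram order does not zero the score
        log_prec += math.log((overlap + 1) / (total + 1))
    # brevity penalty discourages overly short hypotheses
    bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return bp * math.exp(log_prec / max_n)

reference = "the cat sat on the mat"
paraphrase = "the cat sat on the rug"  # synonym swap, meaning intact

exact_score = sentence_bleu(reference, reference)
para_score = sentence_bleu(reference, paraphrase)
```

Here the one-word synonym swap costs the hypothesis overlap at every n-gram order that spans the final token, so `para_score` lands well below `exact_score` despite the translations being semantically equivalent.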
Similar Papers
Revisiting Metric Reliability for Fine-grained Evaluation of Machine Translation and Summarization in Indian Languages
Computation and Language
Helps computers translate Indian languages better.
A Critical Study of Automatic Evaluation in Sign Language Translation
Computation and Language
Helps computers judge sign language videos better.
COMET-poly: Machine Translation Metric Grounded in Other Candidates
Computation and Language
Makes computer translations better by checking more options.