Context-Aware Monolingual Human Evaluation of Machine Translation
By: Silvio Picinini, Sheila Castilho
Potential Business Impact:
Lets people check translations without the original text.
This paper explores the potential of context-aware monolingual human evaluation for assessing machine translation (MT) when no source text is given for reference. To this end, we compare monolingual with bilingual evaluations (with source text) under two scenarios: the evaluation of a single MT system, and the comparative evaluation of pairwise MT systems. Four professional translators performed both monolingual and bilingual evaluations, assigning ratings, annotating errors, and providing feedback on their experience. Our findings suggest that context-aware monolingual human evaluation achieves outcomes comparable to bilingual evaluation, and indicate the feasibility and potential of monolingual evaluation as an efficient approach to assessing MT.
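As a minimal illustration of how agreement between the two protocols might be quantified (the sketch below is not taken from the paper, and the segment-level ratings are hypothetical placeholders), one could compute a rank correlation between ratings produced with and without the source text:

```python
# Minimal sketch (hypothetical data): comparing ratings the same translators
# might assign to the same MT segments under monolingual vs. bilingual evaluation.
from scipy.stats import spearmanr

# Hypothetical 1-5 quality ratings for eight MT segments.
monolingual_ratings = [4, 3, 5, 2, 4, 4, 3, 5]   # evaluated without the source text
bilingual_ratings   = [4, 3, 4, 2, 5, 4, 3, 5]   # evaluated with the source text

# Rank correlation as one way to check whether the two protocols
# order segments similarly.
rho, p_value = spearmanr(monolingual_ratings, bilingual_ratings)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```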
Similar Papers
An Interdisciplinary Approach to Human-Centered Machine Translation
Computation and Language
Makes computer translations more helpful for everyone.
Contextual Cues in Machine Translation: Investigating the Potential of Multi-Source Input Strategies in LLMs and NMT Systems
Computation and Language
Improves computer translations by adding extra clues.
Déjà Vu: Multilingual LLM Evaluation through the Lens of Machine Translation Evaluation
Computation and Language
Tests AI language skills better for smarter tools.