COMET-poly: Machine Translation Metric Grounded in Other Candidates
By: Maike Züfle, Vilém Zouhar, Tu Anh Dinh, et al.
Potential Business Impact:
Makes automatic judging of computer translations more accurate by comparing multiple candidate translations.
Automated metrics for machine translation attempt to replicate human judgment. Unlike humans, who often assess a translation in the context of multiple alternatives, these metrics typically consider only the source sentence and a single translation. This discrepancy in the evaluation setup may negatively impact the performance of automated metrics. We propose two automated metrics that incorporate additional information beyond the single translation. COMET-polycand uses alternative translations of the same source sentence to compare and contrast with the translation at hand, thereby providing a more informed assessment of its quality. COMET-polyic, inspired by retrieval-based in-context learning, takes in translations of similar source texts along with their human-labeled quality scores to guide the evaluation. We find that including a single additional translation in COMET-polycand improves the segment-level metric performance (0.079 to 0.118 Kendall's tau-b correlation), with further gains when more translations are added. Incorporating retrieved examples in COMET-polyic yields similar improvements (0.079 to 0.116 Kendall's tau-b correlation). We release our models publicly.
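The segment-level gains above are reported as Kendall's tau-b, a rank correlation between metric scores and human judgments that corrects for ties. As a minimal illustration (not the paper's implementation), a pure-Python version of tau-b can be sketched as follows; the function name and example scores are hypothetical:

```python
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b rank correlation between two score lists.

    Counts concordant and discordant pairs, with a tie correction
    in the denominator: (C - D) / sqrt((n0 - t_x) * (n0 - t_y)),
    where n0 is the total number of pairs and t_x, t_y count pairs
    tied in x and y respectively.
    """
    concordant = discordant = ties_x = ties_y = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        dx, dy = xi - xj, yi - yj
        if dx == 0:
            ties_x += 1
        if dy == 0:
            ties_y += 1
        if dx == 0 or dy == 0:
            continue  # tied pairs contribute only to the correction
        if dx * dy > 0:
            concordant += 1
        else:
            discordant += 1
    n0 = len(x) * (len(x) - 1) // 2  # total number of pairs
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))

# Hypothetical example: metric scores vs. human quality labels
metric_scores = [0.62, 0.71, 0.55, 0.90]
human_scores = [70, 85, 60, 95]
print(kendall_tau_b(metric_scores, human_scores))  # → 1.0 (same ranking)
```

A higher tau-b means the metric orders translations more like human annotators do, which is what the reported improvement from 0.079 to 0.118 reflects.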
Similar Papers
Long-context Reference-based MT Quality Estimation
Computation and Language
Helps computers judge translation quality using longer context.
Crosslingual Optimized Metric for Translation Assessment of Indian Languages
Computation and Language
Helps computers judge Indian language translations better.
SSA-COMET: Do LLMs Outperform Learned Metrics in Evaluating MT for Under-Resourced African Languages?
Computation and Language
Helps judge translation quality for under-resourced African languages.