What do the metrics mean? A critical analysis of the use of Automated Evaluation Metrics in Interpreting
By: Jonathan Downie, Joss Moorkens
Potential Business Impact:
Tests how good interpreting, whether by humans or machines, really is.
With the growth of interpreting technologies, from remote interpreting and Computer-Aided Interpreting to automated speech translation and interpreting avatars, there is now high demand for ways to measure the quality of any interpreting delivered quickly and efficiently. A range of approaches has been proposed to meet this need, each involving some measure of automation. This article examines these recently proposed quality measurement methods and discusses their suitability for measuring the quality of authentic interpreting practice, whether delivered by humans or machines. It concludes that automatic metrics as currently proposed cannot take the communicative context into account and are therefore not viable measures of the quality of any interpreting provision when used on their own. Across all attempts to measure or even categorise quality in Interpreting Studies, the contexts in which interpreting takes place have proven fundamental to the final analysis.
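To see why context never enters the calculation, consider how a typical surface-overlap metric is computed. The article does not name a specific metric; the following is a minimal sketch of a BLEU-style n-gram precision score, chosen here only as an illustration. The score is a function of the hypothesis and reference strings alone, so the setting, participants, and purpose of the exchange cannot influence it.

```python
# Minimal sketch of a BLEU-style surface-overlap metric (illustrative only;
# the article does not tie its argument to any one metric).
import math
from collections import Counter


def ngrams(tokens, n):
    """Return a Counter of the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def bleu_like_score(hypothesis: str, reference: str, max_n: int = 4) -> float:
    """Sentence-level n-gram precision with a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts, ref_counts = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # smooth to avoid log(0)
    brevity_penalty = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
    return brevity_penalty * math.exp(sum(math.log(p) for p in precisions) / max_n)


# The same rendering receives the same score whether it was produced in a
# casual chat or a courtroom: nothing about the communicative situation,
# the speakers, or the stakes ever reaches the metric.
print(bleu_like_score("the witness said she was not there",
                      "the witness stated that she was not present"))
```

Learned metrics that compare embeddings rather than n-grams have the same signature: text in, score out, with the surrounding communicative context left outside the computation.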
Similar Papers
A Critical Study of Automatic Evaluation in Sign Language Translation
Computation and Language
Helps computers judge sign language videos better.
AutoMetrics: Approximate Human Judgements with Automatically Generated Evaluators
Computation and Language
Tests AI tools faster with less human help.
Automatic Evaluation Metrics for Document-level Translation: Overview, Challenges and Trends
Computation and Language
Checks if computer translations are good.