Score: 1

A Gamified Evaluation and Recruitment Platform for Low Resource Language Machine Translation Systems

Published: June 13, 2025 | arXiv ID: 2506.11467v1

By: Carlos Rafael Catalan

BigTech Affiliations: Samsung

Potential Business Impact:

Uses gamification to recruit scarce human evaluators and gather data, improving machine translation for low-resource languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Human evaluators make essential contributions to the evaluation of large language models. This is especially apparent for Machine Translation (MT) systems for low-resource languages (LRLs), where popular automated metrics tend to be string-based and therefore fail to capture the nuances of a system's behavior. Human evaluators with the necessary expertise in the language can assess adequacy, fluency, and other important qualities. However, the low-resource nature of these languages means that both datasets and evaluators are in short supply. This presents the following conundrum: how can developers of MT systems for these LRLs find suitable human evaluators and datasets? This paper first presents a comprehensive review of existing evaluation procedures, with the objective of producing a design proposal for a platform that addresses the gap in datasets and evaluators for developing MT systems. The result is a design for a recruitment and gamified evaluation platform for developers of MT systems. The challenges of evaluating this platform are also discussed, along with its possible applications in the wider scope of Natural Language Processing (NLP) research.
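To make the limitation of string-based metrics concrete, here is a minimal sketch that scores two hypothetical translations with sacrebleu, a standard BLEU implementation. The example sentences, the choice of library, and the expected ranking are illustrative assumptions, not taken from the paper: an adequate paraphrase with little lexical overlap scores lower than a semantically wrong output that shares more n-grams with the reference.

```python
# A minimal sketch of why string-based metrics can mislead in MT evaluation.
# sacrebleu is used as a representative metric library; the sentences and the
# expected ranking are illustrative assumptions, not taken from the paper.
import sacrebleu

reference = ["the child is playing in the garden"]

# Hypothesis A: an adequate paraphrase with little lexical overlap.
hyp_paraphrase = ["a kid plays outside in the yard"]
# Hypothesis B: high lexical overlap, but the meaning is wrong.
hyp_wrong = ["the child is sleeping in the garden"]

for label, hyp in [("adequate paraphrase", hyp_paraphrase),
                   ("wrong meaning", hyp_wrong)]:
    # corpus_bleu takes a list of hypotheses and a list of reference streams.
    score = sacrebleu.corpus_bleu(hyp, [reference])
    print(f"{label}: BLEU = {score.score:.1f}")

# BLEU rewards the semantically wrong hypothesis for sharing more n-grams with
# the reference; judging adequacy and fluency is what human evaluators add.
```

A string-based score alone cannot distinguish a faithful paraphrase from a fluent mistranslation, which is the gap the proposed evaluator-recruitment platform aims to close.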

Country of Origin
🇰🇷 South Korea

Page Count
7 pages

Category
Computer Science:
Computation and Language