When the Gold Standard isn't Necessarily Standard: Challenges of Evaluating the Translation of User-Generated Content

Published: December 19, 2025 | arXiv ID: 2512.17738v1

By: Lydia Nishimwe, Benoît Sagot, Rachel Bawden

Potential Business Impact:

Helps computers translate messy online text better.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

User-generated content (UGC) is characterised by frequent use of non-standard language, from spelling errors to expressive choices such as slang, character repetitions, and emojis. This makes evaluating UGC translation particularly challenging: what counts as a "good" translation depends on the level of standardness desired in the output. To explore this, we examine the human translation guidelines of four UGC datasets, and derive a taxonomy of twelve non-standard phenomena and five translation actions (NORMALISE, COPY, TRANSFER, OMIT, CENSOR). Our analysis reveals notable differences in how UGC is treated, resulting in a spectrum of standardness in reference translations. Through a case study on large language models (LLMs), we show that translation scores are highly sensitive to prompts with explicit translation instructions for UGC, and that they improve when these align with the dataset's guidelines. We argue that when preserving UGC style is important, fair evaluation requires both models and metrics to be aware of translation guidelines. Finally, we call for clear guidelines during dataset creation and for the development of controllable, guideline-aware evaluation frameworks for UGC translation.
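The paper's five translation actions can be pictured as a mapping from non-standard phenomena to handling rules, which is then surfaced to the model as an explicit instruction prompt. The sketch below is purely illustrative: the phenomenon names, the particular phenomenon-to-action assignments, and the prompt wording are assumptions for demonstration, not the paper's actual taxonomy entries, dataset guidelines, or prompts.

```python
from enum import Enum

# The five translation actions named in the paper.
class Action(Enum):
    NORMALISE = "normalise"   # standardise the phenomenon in the output
    COPY = "copy"             # reproduce it verbatim (e.g. an emoji)
    TRANSFER = "transfer"     # render an equivalent in the target language
    OMIT = "omit"             # drop it from the translation
    CENSOR = "censor"         # mask it (e.g. profanity)

# Hypothetical guideline: which action to apply to each phenomenon.
# These assignments are illustrative, not taken from any of the four datasets.
guideline = {
    "spelling_error": Action.NORMALISE,
    "emoji": Action.COPY,
    "slang": Action.TRANSFER,
    "character_repetition": Action.TRANSFER,
    "profanity": Action.CENSOR,
}

def build_prompt(source: str, guideline: dict) -> str:
    """Build an explicit UGC translation instruction prompt from a guideline."""
    rules = "\n".join(
        f"- {phenomenon.replace('_', ' ')}: {action.value}"
        for phenomenon, action in guideline.items()
    )
    return (
        "Translate the following user-generated text into English.\n"
        "Handle non-standard phenomena as follows:\n"
        f"{rules}\n"
        f"Text: {source}"
    )

print(build_prompt("c tro cooool 😅", guideline))
```

Varying the `guideline` mapping here mirrors the paper's finding: the same source sentence yields different "correct" outputs (and different scores) depending on which dataset's guidelines the instructions align with.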


Page Count
19 pages

Category
Computer Science:
Computation and Language