Hacking Neural Evaluation Metrics with Single Hub Text

Published: December 18, 2025 | arXiv ID: 2512.16323v1

By: Hiroyuki Deguchi, Katsuki Chousa, Yusuke Sakai

Potential Business Impact:

Shows that a single adversarial text can fool neural translation-quality metrics.

Business Areas:
Text Analytics, Data and Analytics, Software

Strongly human-correlated evaluation metrics serve as an essential compass for the development and improvement of generation models and must be highly reliable and robust. Recent embedding-based neural text evaluation metrics, such as COMET for translation tasks, are widely used in both research and development. However, there is no guarantee that they yield reliable evaluation results, owing to the black-box nature of neural networks. To raise concerns about the reliability and safety of such metrics, we propose a method for finding a single adversarial text in the discrete space that is consistently evaluated as high quality regardless of the test case, thereby exposing vulnerabilities in evaluation metrics. The single hub text found with our method achieved 79.1 COMET% and 67.8 COMET% in the WMT'24 English-to-Japanese (En-Ja) and English-to-German (En-De) translation tasks, respectively, outperforming translations generated individually for each source sentence by M2M100, a general translation model. Furthermore, we confirmed that the hub text found with our method generalizes across multiple language pairs, such as Ja-En and De-En.
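The abstract's core idea — searching the discrete text space for one "hub" text whose average metric score over many test cases is high — can be pictured as a greedy substitution search. The sketch below is illustrative only, not the authors' algorithm: `score_fn` is a toy token-overlap stand-in for a neural metric such as COMET, and `vocab`, `test_cases`, and all parameter choices are assumptions for the example.

```python
import random


def score_fn(candidate_tokens, reference_tokens):
    # Toy stand-in for a neural metric such as COMET: fraction of
    # candidate tokens that appear in the reference. A real attack
    # would query the actual black-box metric here.
    ref = set(reference_tokens)
    return sum(t in ref for t in candidate_tokens) / max(len(candidate_tokens), 1)


def avg_score(candidate, test_cases):
    # A hub text must score well regardless of the input, so we
    # average the metric over all (source, reference) test cases.
    return sum(score_fn(candidate, ref) for _, ref in test_cases) / len(test_cases)


def greedy_hub_search(vocab, test_cases, length=4, iters=20, seed=0):
    # Greedy hill climbing in discrete token space: repeatedly try
    # every single-token substitution and keep any strict improvement.
    rng = random.Random(seed)
    candidate = [rng.choice(vocab) for _ in range(length)]
    best = avg_score(candidate, test_cases)
    for _ in range(iters):
        improved = False
        for pos in range(length):
            for tok in vocab:
                trial = candidate[:pos] + [tok] + candidate[pos + 1:]
                s = avg_score(trial, test_cases)
                if s > best:
                    candidate, best, improved = trial, s, True
        if not improved:  # local optimum in the discrete space
            break
    return candidate, best
```

With a toy setup where the token "good" appears in every reference, the search converges on a text built from such universally rewarded tokens — the discrete analogue of the hub texts the paper finds against COMET.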

Page Count
9 pages

Category
Computer Science:
Computation and Language