Hacking Neural Evaluation Metrics with Single Hub Text
By: Hiroyuki Deguchi, Katsuki Chousa, Yusuke Sakai
Potential Business Impact:
Finds bad translations that fool smart checkers.
Evaluation metrics that correlate strongly with human judgments serve as an essential compass for developing and improving generation models, and they must therefore be highly reliable and robust. Recent embedding-based neural text evaluation metrics, such as COMET for translation tasks, are widely used in both research and development. However, because of the black-box nature of neural networks, there is no guarantee that they yield reliable evaluation results. To raise concerns about the reliability and safety of such metrics, we propose a method for exposing their vulnerabilities by searching the discrete text space for a single adversarial text that is consistently evaluated as high quality, regardless of the test case. The single hub text found with our method achieved COMET scores of 79.1% and 67.8% in the WMT'24 English-to-Japanese (En--Ja) and English-to-German (En--De) translation tasks, respectively, outperforming translations generated individually for each source sentence by M2M100, a general-purpose translation model. We also confirmed that the hub text found with our method generalizes across multiple language pairs, such as Ja--En and De--En.
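To make the idea of a "hub text" concrete, the sketch below shows one possible (hypothetical) way to search the discrete text space for a single candidate that scores highly under COMET across many test cases. It is not the authors' algorithm: the hill-climbing procedure, the toy test set, the candidate vocabulary, and the choice of the Unbabel/wmt22-comet-da checkpoint are all illustrative assumptions, used only to show what "one text, evaluated against every test case" looks like in code.

```python
# Hypothetical sketch: greedy discrete search for a single "hub" text that
# maximizes the average COMET score over a fixed set of test cases.
# NOT the paper's method; model name, test set, and vocabulary are assumptions.
import random

from comet import download_model, load_from_checkpoint

# Load a public reference-based COMET model (assumed to be accessible).
model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))

# Toy (source, reference) test cases the single hub text is scored against.
test_cases = [
    {"src": "The cat sits on the mat.", "ref": "Die Katze sitzt auf der Matte."},
    {"src": "It is raining today.", "ref": "Heute regnet es."},
]

# Hypothetical candidate vocabulary from which hub tokens are drawn.
vocab = ["die", "das", "ist", "heute", "Katze", "gut", "sehr", "."]


def mean_comet(hub: str) -> float:
    """Average COMET score of one fixed hub text over all test cases."""
    data = [{"src": c["src"], "mt": hub, "ref": c["ref"]} for c in test_cases]
    return model.predict(data, batch_size=8, gpus=0).system_score


def hill_climb(num_tokens: int = 6, steps: int = 50, seed: int = 0) -> str:
    """Greedy hill climbing: mutate one token at a time, keep improvements."""
    rng = random.Random(seed)
    tokens = [rng.choice(vocab) for _ in range(num_tokens)]
    best = mean_comet(" ".join(tokens))
    for _ in range(steps):
        i = rng.randrange(num_tokens)
        old = tokens[i]
        tokens[i] = rng.choice(vocab)       # propose a single-token mutation
        score = mean_comet(" ".join(tokens))
        if score > best:
            best = score                    # keep the improving mutation
        else:
            tokens[i] = old                 # revert an unhelpful mutation
    return " ".join(tokens)


if __name__ == "__main__":
    hub = hill_climb()
    print("hub text:", hub, "| mean COMET:", round(mean_comet(hub), 3))
```

The key design point the sketch illustrates is that the optimization target is the mean score over all test cases rather than any single source sentence, which is what makes the resulting text a "hub" that the metric rates highly regardless of the input.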
Similar Papers
COMET-poly: Machine Translation Metric Grounded in Other Candidates
Computation and Language
Makes computer translations better by checking more options.
The illusion of a perfect metric: Why evaluating AI's words is harder than it looks
Computation and Language
Helps AI write better by checking its work.
Crosslingual Optimized Metric for Translation Assessment of Indian Languages
Computation and Language
Helps computers judge Indian language translations better.