Form and Meaning in Intrinsic Multilingual Evaluations
By: Wessel Poelman, Miryam de Lhoneux
Intrinsic evaluation metrics for conditional language models (CLMs), such as perplexity or bits-per-character, are widely used in both mono- and multilingual settings. These metrics are straightforward to use and compare in monolingual setups, but they rest on a number of assumptions in multilingual setups. One such assumption is that comparing the perplexity of CLMs on parallel sentences is indicative of their quality, since the information content (understood here as the semantic meaning) is the same across languages. However, these metrics inherently measure information content in the information-theoretic sense. We make this and other such assumptions explicit and discuss their implications. We perform experiments with six metrics on two multi-parallel corpora, using both mono- and multilingual models. Ultimately, we find that current metrics are not universally comparable across languages. We turn to the form-meaning debate to help explain this.
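To make concrete why such scores are tied to surface form, consider how perplexity and bits-per-character are typically computed: both normalize a summed negative log-likelihood, but by different quantities (token count versus character count), and those quantities differ across languages even for parallel sentences. Below is a minimal sketch in Python; the two "languages" and their numbers are hypothetical and only illustrate that the normalizers diverge on parallel text, not results or code from the paper.

```python
import math

def bits_per_character(total_nll_nats: float, num_characters: int) -> float:
    """Convert a summed negative log-likelihood (in nats) to bits per character."""
    return total_nll_nats / (num_characters * math.log(2))

def perplexity(total_nll_nats: float, num_tokens: int) -> float:
    """Token-level perplexity: exp of the average negative log-likelihood per token."""
    return math.exp(total_nll_nats / num_tokens)

# Hypothetical statistics for one parallel sentence pair (illustrative only):
# same meaning, but different token and character counts per language.
stats_per_language = {
    "lang_a": {"nll": 42.0, "tokens": 12, "chars": 55},
    "lang_b": {"nll": 51.0, "tokens": 9, "chars": 62},
}

for name, s in stats_per_language.items():
    ppl = perplexity(s["nll"], s["tokens"])
    bpc = bits_per_character(s["nll"], s["chars"])
    print(f"{name}: perplexity={ppl:.2f}, bits-per-character={bpc:.3f}")
```

Even when the meaning is identical, the per-token and per-character normalizers, and the segmentation that produces them, are language-dependent, which is one reason the resulting scores are not directly comparable across languages.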