A Parallel Cross-Lingual Benchmark for Multimodal Idiomaticity Understanding

Published: January 13, 2026 | arXiv ID: 2601.08645v1

By: Dilara Torunoğlu-Selamet, Dogukan Arslan, Rodrigo Wilkens, and more

Potential Business Impact:

Helps computers understand tricky sayings in different languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Potentially idiomatic expressions (PIEs) convey meanings inherently tied to the everyday experience of a given language community. As such, they pose an interesting challenge for assessing the linguistic (and, to some extent, cultural) capabilities of NLP systems. In this paper, we present XMPIE, a parallel multilingual and multimodal dataset of potentially idiomatic expressions. The dataset, covering 34 languages and over ten thousand items, enables comparative analyses of idiomatic patterns across language-specific realisations and preferences, yielding insights into shared cultural aspects. Its parallel design makes it possible to evaluate model performance for a given PIE across languages and to test whether idiomatic understanding in one language transfers to another. Moreover, the dataset supports the study of PIEs across textual and visual modalities, measuring the extent to which PIE understanding in one modality (text vs. image) transfers to or implies understanding in the other. The data was created by language experts, with both textual and visual components crafted under multilingual guidelines, and each PIE is accompanied by five images representing a spectrum from idiomatic to literal meanings, including semantically related and random distractors. The result is a high-quality benchmark for evaluating multilingual and multimodal idiomatic language understanding.
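The per-item structure described above (one PIE plus five images spanning idiomatic to literal meanings, including distractors) can be sketched as a record type. This is a minimal illustrative sketch only; the class, field, and role names below are assumptions, not the dataset's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class ImageRole(Enum):
    """Hypothetical labels for the five images accompanying each PIE."""
    IDIOMATIC = "idiomatic"          # depicts the figurative meaning
    LITERAL = "literal"              # depicts the compositional meaning
    PARTIAL = "partial"              # intermediate point on the spectrum
    RELATED = "related_distractor"   # semantically related distractor
    RANDOM = "random_distractor"     # random distractor

@dataclass
class PIEItem:
    """One entry of a multilingual, multimodal PIE benchmark (illustrative)."""
    language: str                                               # e.g. an ISO 639-1 code
    expression: str                                             # the potentially idiomatic expression
    context: str                                                # a sentence containing the expression
    images: dict[ImageRole, str] = field(default_factory=dict)  # role -> image path

    def is_complete(self) -> bool:
        """True when all five image roles are present."""
        return set(self.images) == set(ImageRole)

# Example entry (invented for illustration).
item = PIEItem(
    language="en",
    expression="kick the bucket",
    context="After years of service, the old tractor finally kicked the bucket.",
    images={role: f"img/{role.value}.png" for role in ImageRole},
)
print(item.is_complete())  # True
```

Keying the images by role rather than storing a flat list makes the idiomatic-to-literal spectrum explicit, which is convenient when probing whether a model's choice shifts between the textual and visual presentations of the same PIE.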

Page Count
13 pages

Category
Computer Science:
Computation and Language