Evaluating Large Language Models on Multiword Expressions in Multilingual and Code-Switched Contexts
By: Frances Laureano De Leon, Harish Tayyar Madabushi, Mark G. Lee
Potential Business Impact:
Computers still struggle with tricky word meanings.
Multiword expressions, characterised by non-compositional meanings and syntactic irregularities, are an example of nuanced language. These expressions can be used literally or idiomatically, leading to significant changes in meaning. While large language models have demonstrated strong performance across many tasks, their ability to handle such linguistic subtleties remains uncertain. This study therefore evaluates how state-of-the-art language models process the ambiguity of potentially idiomatic multiword expressions, particularly in less frequent contexts, where models are less likely to rely on memorisation. By evaluating models in Portuguese and Galician, in addition to English, and by using a novel code-switched dataset and a novel task, we find that large language models, despite their strengths, struggle with nuanced language. In particular, we find that the latest models, including GPT-4, fail to outperform the xlm-roBERTa-base baselines on both detection and semantic tasks, with especially poor performance on the novel task we introduce, despite its similarity to existing tasks. Overall, our results demonstrate that multiword expressions, especially ambiguous ones, remain a challenge for models.
Similar Papers
Multilingual Performance Biases of Large Language Models in Education
Computation and Language
Tests how well computers support students learning in different languages.
Evaluating Programming Language Confusion
Software Engineering
Fixes computer programs that accidentally switch languages.
Multilingual Definition Modeling
Computation and Language
Helps computers explain words in many languages.