Do LLMs exhibit the same commonsense capabilities across languages?

Published: September 8, 2025 | arXiv ID: 2509.06401v1

By: Ivan Martínez-Murillo, Elena Lloret, Paloma Moreda, and more

Potential Business Impact:

LLMs can generate commonsense sentences in many languages, but their quality is much higher in English than in less-resourced languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper explores the multilingual commonsense generation abilities of Large Language Models (LLMs). To facilitate this investigation, we introduce MULTICOM, a novel benchmark that extends the COCOTEROS dataset to four languages: English, Spanish, Dutch, and Valencian. The task involves generating a commonsensical sentence that includes a given triplet of words. We evaluate a range of open-source LLMs, including LLaMA, Qwen, Gemma, EuroLLM, and Salamandra, on this benchmark. Our evaluation combines automatic metrics, LLM-as-a-judge approaches (using Prometheus and JudgeLM), and human annotations. Results consistently show superior performance in English and significantly weaker performance in less-resourced languages. While contextual support yields mixed results, it tends to benefit underrepresented languages. These findings underscore the current limitations of LLMs in multilingual commonsense generation. The dataset is publicly available at https://huggingface.co/datasets/gplsi/MULTICOM.
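To make the task concrete, here is a minimal Python sketch of how the benchmark might be loaded and a word triplet turned into a generation prompt. The dataset ID comes from the abstract; the split name ("train"), column name ("concepts"), and prompt wording are illustrative assumptions, not the paper's actual setup.

```python
# A minimal sketch of the MULTICOM constrained-generation task, using the
# Hugging Face `datasets` library. The dataset ID is taken from the abstract;
# the split and column names below are assumptions, since the schema is not
# described here.
from datasets import load_dataset

dataset = load_dataset("gplsi/MULTICOM")

example = dataset["train"][0]   # split name is an assumption
triplet = example["concepts"]   # assumed column holding the three given words

# Build the kind of prompt the task implies: one commonsensical sentence
# that naturally uses all three words.
prompt = (
    "Write a single, commonsensical sentence that naturally includes all of "
    f"the following words: {', '.join(triplet)}."
)
print(prompt)
```

Any of the evaluated open-source models (e.g., LLaMA, Qwen) could then be prompted with such a string; the paper's exact prompting and evaluation protocol is not reproduced here.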

Page Count
18 pages

Category
Computer Science: Computation and Language