Score: 2

Evaluating Multilingual and Code-Switched Alignment in LLMs via Synthetic Natural Language Inference

Published: August 20, 2025 | arXiv ID: 2508.14735v1

By: Samir Abdaljalil, Erchin Serpedin, Khalid Qaraqe, and more

Potential Business Impact:

Helps AI language systems reason consistently across different languages, including mixed-language text.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are increasingly applied in multilingual contexts, yet their capacity for consistent, logically grounded alignment across languages remains underexplored. We present a controlled evaluation framework for multilingual natural language inference (NLI) that generates synthetic, logic-based premise-hypothesis pairs and translates them into a typologically diverse set of languages. This design enables precise control over semantic relations and allows testing in both monolingual and mixed-language (code-switched) conditions. Surprisingly, code-switching does not degrade, and can even improve, performance, suggesting that translation-induced lexical variation may serve as a regularization signal. We validate semantic preservation through embedding-based similarity analyses and cross-lingual alignment visualizations, confirming the fidelity of translated pairs. Our findings expose both the potential and the brittleness of current LLM cross-lingual reasoning, and identify code-switching as a promising lever for improving multilingual robustness. Code available at: https://github.com/KurbanIntelligenceLab/nli-stress-testing
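A minimal sketch of the evaluation idea described in the abstract: build a logic-based premise-hypothesis pair, create a code-switched variant, and check semantic preservation with embedding cosine similarity. This is not the paper's code; the word-level translations, the code-switching rule, and the `sentence-transformers` model are illustrative assumptions.

```python
# Sketch (assumed setup, not the authors' implementation): construct a synthetic
# NLI pair, produce a code-switched premise, and measure semantic preservation
# via embedding similarity, as the abstract describes.
import random
from sentence_transformers import SentenceTransformer, util

# A synthetic, logic-based pair with a known label (entailment).
premise_en = "All birds can fly. Tweety is a bird."
hypothesis_en = "Tweety can fly."

# Hypothetical word-level translations used to build the mixed-language variant.
es_substitutions = {"birds": "pájaros", "bird": "pájaro", "fly": "volar"}

def code_switch(sentence: str, table: dict, p: float = 0.5) -> str:
    """Swap a random subset of words for their translations, keeping punctuation."""
    out = []
    for word in sentence.split():
        core = word.strip(".,")
        if core.lower() in table and random.random() < p:
            out.append(word.replace(core, table[core.lower()]))
        else:
            out.append(word)
    return " ".join(out)

random.seed(0)
premise_cs = code_switch(premise_en, es_substitutions)

# Embedding-based similarity between original and code-switched premise,
# used here as a proxy for semantic preservation.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
emb = model.encode([premise_en, premise_cs], convert_to_tensor=True)
similarity = util.cos_sim(emb[0], emb[1]).item()

print(f"Code-switched premise: {premise_cs}")
print(f"Cosine similarity (semantic preservation proxy): {similarity:.3f}")
```

A full evaluation in this spirit would repeat this over many generated pairs, several target languages, and both monolingual and mixed-language conditions, then compare LLM entailment predictions against the known labels.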

Country of Origin
🇺🇸 🇶🇦 United States, Qatar

Repos / Data Links
https://github.com/KurbanIntelligenceLab/nli-stress-testing

Page Count
10 pages

Category
Computer Science:
Computation and Language