Lost in Execution: On the Multilingual Robustness of Tool Calling in Large Language Models

Published: January 8, 2026 | arXiv ID: 2601.05366v1

By: Zheng Luo, T. Pranav Kutralingam, Ogochukwu N. Okoani, and more

Potential Business Impact:

Diagnoses why AI agents mishandle tool calls from non-English users and evaluates mitigations that reduce, but do not eliminate, those failures.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are increasingly deployed as agents that invoke external tools through structured function calls. While recent work reports strong tool-calling performance under standard English-centric evaluations, the robustness of tool calling under multilingual user interactions remains underexplored. In this work, we introduce MLCL, a diagnostic benchmark, and conduct a systematic evaluation of multilingual tool calling across Chinese, Hindi, and the low-resource language Igbo. Through fine-grained error analysis, we show that many failures occur despite correct intent understanding and tool selection. We identify parameter-value language mismatch as a dominant failure mode: models generate semantically appropriate parameter values in the user's language, thereby violating language-invariant execution conventions. We further evaluate several inference-time system strategies and find that while they substantially reduce language-induced execution errors, none fully recovers English-level performance.
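To make the failure mode concrete, here is a minimal sketch (not the paper's MLCL benchmark or its evaluated mitigations) of a parameter-value language mismatch and one plausible inference-time repair. The tool name `get_weather`, its schema, and the alias table are all hypothetical stand-ins.

```python
from typing import Any

# Hypothetical tool schema: the executor accepts only canonical English city names.
WEATHER_TOOL = {
    "name": "get_weather",
    "parameters": {
        "city": {"type": "string", "enum": ["Beijing", "Delhi", "Lagos"]},
    },
}

# What a model might emit for the Chinese query "北京今天天气怎么样?"
# ("What's the weather in Beijing today?"): the intent and tool choice are
# correct, but the parameter value is rendered in the user's language.
model_call = {"tool": "get_weather", "arguments": {"city": "北京"}}

# Hypothetical alias table standing in for an inference-time normalization
# strategy; a real system would also need coverage for low-resource
# languages such as Igbo.
CANONICAL_ALIASES = {
    "北京": "Beijing",   # Chinese
    "दिल्ली": "Delhi",    # Hindi
}


def validate_and_normalize(call: dict[str, Any], schema: dict[str, Any]) -> dict[str, Any]:
    """Detect parameter values that violate the schema's canonical forms and repair them."""
    fixed = {"tool": call["tool"], "arguments": {}}
    for param, value in call["arguments"].items():
        allowed = schema["parameters"][param].get("enum")
        if allowed and value not in allowed:
            # Language mismatch: semantically right, conventionally wrong.
            value = CANONICAL_ALIASES.get(value, value)
            if value not in allowed:
                raise ValueError(f"unresolvable value for {param!r}: {call['arguments'][param]!r}")
        fixed["arguments"][param] = value
    return fixed


if __name__ == "__main__":
    print(validate_and_normalize(model_call, WEATHER_TOOL))
    # {'tool': 'get_weather', 'arguments': {'city': 'Beijing'}}
```

As the abstract notes, such post-hoc repairs reduce language-induced execution errors but do not fully close the gap with English-level performance; values outside the alias table still fail.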

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
18 pages

Category
Computer Science:
Computation and Language