Lost in Execution: On the Multilingual Robustness of Tool Calling in Large Language Models
By: Zheng Luo, T Pranav Kutralingam, Ogochukwu N Okoani, and more
Potential Business Impact:
Shows why AI tools break when users speak non-English languages, and how to reduce those errors.
Large Language Models (LLMs) are increasingly deployed as agents that invoke external tools through structured function calls. While recent work reports strong tool-calling performance under standard English-centric evaluations, the robustness of tool calling under multilingual user interactions remains underexplored. In this work, we introduce MLCL, a diagnostic benchmark, and conduct a systematic evaluation of multilingual tool calling across Chinese, Hindi, and the low-resource language Igbo. Through fine-grained error analysis, we show that many failures occur despite correct intent understanding and tool selection. We identify parameter value language mismatch as a dominant failure mode: models generate semantically appropriate parameter values in the user's language, violating execution conventions that require language-invariant values. We further evaluate several inference-time system strategies and find that, while they substantially reduce language-induced execution errors, none fully recovers English-level performance.
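To make the dominant failure mode concrete, here is a minimal, hypothetical sketch: the get_weather tool, its schema, the Chinese query, and the normalization table are illustrative assumptions, not artifacts of MLCL or of the strategies the paper evaluates. The model picks the right tool and understands the intent, but emits a translated enum value that the executor cannot accept; a simple inference-time normalization pass maps it back to the canonical English form before execution.

```python
# Hypothetical illustration of "parameter value language mismatch".
# The tool name, schema, and translation table are assumptions for this sketch.

# A tool whose executor accepts only canonical English enum values.
WEATHER_TOOL = {
    "name": "get_weather",
    "parameters": {
        "city": {"type": "string"},  # free-form; any language may be acceptable
        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
}

# What a model might emit for the Chinese query "北京今天多少摄氏度？":
# intent and tool selection are correct, but the enum value is translated.
model_call = {
    "name": "get_weather",
    "arguments": {"city": "北京", "unit": "摄氏度"},  # "摄氏度" means "celsius"
}

# One simple inference-time guard: map known translations back to the
# canonical enum value before execution.
CANONICAL = {
    "摄氏度": "celsius", "सेल्सियस": "celsius",
    "华氏度": "fahrenheit", "फ़ारेनहाइट": "fahrenheit",
}

def normalize_arguments(call: dict, schema: dict) -> dict:
    """Rewrite enum-typed argument values to their canonical English form."""
    fixed = dict(call["arguments"])
    for param, spec in schema["parameters"].items():
        allowed = spec.get("enum")
        value = fixed.get(param)
        if allowed and value not in allowed:
            fixed[param] = CANONICAL.get(value, value)  # fall back to raw value
    return {"name": call["name"], "arguments": fixed}

print(normalize_arguments(model_call, WEATHER_TOOL))
# {'name': 'get_weather', 'arguments': {'city': '北京', 'unit': 'celsius'}}
```

A static lookup table like this only covers translations it already knows, which is consistent with the abstract's finding that inference-time strategies reduce language-induced execution errors without fully recovering English-level performance.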
Similar Papers
Arabic Prompts with English Tools: A Benchmark
Artificial Intelligence
Tests AI's ability to use tools in Arabic.
Tool Calling for Arabic LLMs: Data Strategies and Instruction Tuning
Computation and Language
Teaches Arabic AI models to call tools more reliably.
Alignment for Efficient Tool Calling of Large Language Models
Computation and Language
Helps computers know when to use tools.