Tool Calling for Arabic LLMs: Data Strategies and Instruction Tuning
By: Asim Ersoy, Enes Altinisik, Husrev Taha Sencar, and others
Potential Business Impact:
Enables AI assistants to call external tools reliably from Arabic prompts, broadening their practical use for Arabic-speaking users.
Tool calling is a critical capability that allows Large Language Models (LLMs) to interact with external systems, significantly expanding their utility. However, research and resources for tool calling are predominantly English-centric, leaving a gap in our understanding of how to enable this functionality for other languages, such as Arabic. This paper investigates three key research questions: (1) the necessity of in-language (Arabic) tool-calling data versus relying on cross-lingual transfer, (2) the effect of general-purpose instruction tuning on tool-calling performance, and (3) the value of fine-tuning on specific, high-priority tools. To address these questions, we conduct extensive experiments using base and post-trained variants of an open-weight Arabic LLM. To enable this study, we bridge the resource gap by translating and adapting two open-source tool-calling datasets into Arabic. Our findings provide crucial insights into the optimal strategies for developing robust tool-augmented agents for Arabic.
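For readers unfamiliar with the mechanics, tool calling typically works by giving the model a JSON Schema description of each available tool and then parsing a structured call out of its response. The sketch below is illustrative only and is not taken from the paper; the weather tool, its schema, the Arabic prompt, and the mocked model output are all hypothetical, shown to make concrete what "tool calling in Arabic" asks of a model.

```python
import json

# Hypothetical tool definition in the JSON Schema style commonly used
# for LLM tool calling (names and fields are illustrative, not from the paper).
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# An Arabic user request ("What is the weather in Doha today?").
user_prompt = "ما حالة الطقس في الدوحة اليوم؟"

# A model trained for tool calling is expected to emit a structured call
# rather than free text; here we mock that output to show its shape.
model_output = '{"name": "get_weather", "arguments": {"city": "الدوحة"}}'

call = json.loads(model_output)
assert call["name"] == get_weather_tool["name"]
print(f"Tool: {call['name']}, arguments: {call['arguments']}")
```

The paper's questions map directly onto this exchange: whether a model needs Arabic examples of such exchanges during fine-tuning, or whether training on English ones transfers to Arabic prompts and argument values like the city name above.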
Similar Papers
Arabic Prompts with English Tools: A Benchmark
Artificial Intelligence
Benchmarks LLMs' ability to call English-defined tools from Arabic prompts.
Lost in Execution: On the Multilingual Robustness of Tool Calling in Large Language Models
Computation and Language
Studies how reliably LLMs execute tool calls across different languages.
Tahakom LLM guidelines and receipts: from pre-training data to an Arabic LLM
Machine Learning (CS)
Documents the full pipeline for building an Arabic LLM, from pre-training data onward.