Score: 1

Exploring Data and Parameter Efficient Strategies for Arabic Dialect Identification

Published: September 17, 2025 | arXiv ID: 2509.13775v1

By: Vani Kanjirangat, Ljiljana Dolamic, Fabio Rinaldi

Potential Business Impact:

Helps computers distinguish between different Arabic dialects.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This paper discusses our exploration of data-efficient and parameter-efficient approaches to Arabic Dialect Identification (ADI). In particular, we investigate various soft-prompting strategies, including prefix-tuning, prompt-tuning, P-tuning, and P-tuning V2, as well as LoRA reparameterizations. As the data-efficient strategy, we analyze hard prompting with zero-shot and few-shot inference to probe the dialect identification capabilities of Large Language Models (LLMs). For the parameter-efficient fine-tuning (PEFT) approaches, we conducted experiments using Arabic-specific encoder models on several major datasets. We also analyzed n-shot inference on open-source decoder-only models: a general multilingual model (Phi-3.5) and an Arabic-specific one (SILMA). We observed that the LLMs generally struggle to differentiate dialectal nuances in the few-shot or zero-shot setups. The soft-prompted encoder variants perform better, while the LoRA-based fine-tuned models perform best, even surpassing full fine-tuning.
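The LoRA reparameterization mentioned in the abstract replaces a full weight update with a low-rank one: the frozen pretrained weight W is augmented by (alpha/r) * A @ B, where only the small factors A and B are trained. A minimal NumPy sketch of the forward pass (toy dimensions and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, r):
    """Base projection plus scaled low-rank update: y = xW + (alpha/r) * xAB."""
    return x @ W + (alpha / r) * (x @ A @ B)

# Toy dimensions: input dim 4, output dim 3, LoRA rank r=2.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))          # frozen pretrained weight
A = rng.standard_normal((4, 2)) * 0.01   # trainable down-projection
B = np.zeros((2, 3))                     # trainable up-projection, zero-initialized
x = rng.standard_normal((1, 4))

# With B initialized to zero, the adapted model starts out
# exactly equal to the frozen pretrained model.
y = lora_forward(x, W, A, B, alpha=16, r=2)
assert np.allclose(y, x @ W)
```

The zero initialization of one factor is the standard LoRA trick: training starts from the pretrained behavior, and only the 4x2 + 2x3 = 14 adapter parameters are updated instead of the full 12-parameter weight at scale, which is why it is far cheaper than full fine-tuning on large models.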

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language