Persian-Phi: Efficient Cross-Lingual Adaptation of Compact LLMs via Curriculum Learning

Published: December 8, 2025 | arXiv ID: 2512.07454v1

By: Amir Mohammad Akhlaghi, Amirhossein Shabani, Mostafa Abdolmaleki, et al.

Potential Business Impact:

Enables AI to understand Persian using far less computing power.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The democratization of AI is currently hindered by the immense computational cost of training Large Language Models (LLMs) for low-resource languages. This paper presents Persian-Phi, a 3.8B-parameter model that challenges the assumption that robust multilingual capabilities require massive model sizes or multilingual baselines. We demonstrate how Microsoft Phi-3 Mini -- originally a monolingual English model -- can be effectively adapted to Persian through a novel, resource-efficient curriculum learning pipeline. Our approach employs a unique "warm-up" stage using bilingual narratives (Tiny Stories) to align embeddings prior to heavy training, followed by continual pretraining and instruction tuning via Parameter-Efficient Fine-Tuning (PEFT). Despite its compact size, Persian-Phi achieves competitive results on the Open Persian LLM Leaderboard on Hugging Face. Our findings provide a validated, scalable framework for extending the reach of state-of-the-art LLMs to underrepresented languages with minimal hardware resources. The Persian-Phi model is publicly available at https://huggingface.co/amirakhlaghiqqq/PersianPhi.
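
The PEFT step described in the abstract can be illustrated with a minimal sketch using the Hugging Face transformers and peft libraries. The adapter rank, dropout, and target modules below are illustrative assumptions, not the authors' reported configuration; only the base model (Phi-3 Mini) comes from the paper.

```python
# Minimal sketch of parameter-efficient adaptation of Phi-3 Mini,
# assuming the Hugging Face transformers + peft libraries.
# Hyperparameters here are illustrative, not the paper's settings.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "microsoft/Phi-3-mini-4k-instruct"  # English base model adapted in the paper
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the 3.8B base weights and trains small low-rank adapter
# matrices, which is what makes adaptation feasible on modest hardware.
lora = LoraConfig(
    r=16,                                   # adapter rank (assumed)
    lora_alpha=32,
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

The resulting model can then be passed to a standard training loop, first on bilingual warm-up text and then on Persian pretraining and instruction data, mirroring the curriculum order the abstract describes.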

Country of Origin
🇮🇷 Iran

Page Count
14 pages

Category
Computer Science:
Computation and Language