Score: 1

Low-Resource Dialect Adaptation of Large Language Models: A French Dialect Case-Study

Published: October 26, 2025 | arXiv ID: 2510.22747v1

By: Eeham Khan, Firas Saidani, Owen Van Esbroeck, and more

Potential Business Impact:

Helps AI language models understand low-resource regional dialects at low cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite the widespread adoption of large language models (LLMs), their strongest capabilities remain largely confined to a small number of high-resource languages for which there is abundant training data. Recently, continual pre-training (CPT) has emerged as a means to adapt these models to low-resource regional dialects. In this paper, we study the use of CPT for dialect learning under tight data and compute budgets. Using low-rank adaptation (LoRA) and compute-efficient continual pre-training, we adapt three LLMs to the Québec French dialect using a very small dataset and benchmark them on the COLE suite. Our experiments demonstrate an improvement on the minority-dialect benchmarks with minimal regression on the prestige-language benchmarks, while updating under 1% of model parameters. Analysis of the results demonstrates that gains are highly contingent on corpus composition. These findings indicate that CPT with parameter-efficient fine-tuning (PEFT) can narrow the dialect gap by providing cost-effective and sustainable language resource creation, expanding high-quality LLM access to minority linguistic communities. We release the first Québec French LLMs on HuggingFace.
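To make the method concrete, here is a minimal sketch of LoRA-based continual pre-training with the Hugging Face transformers and peft libraries, the general setup the abstract describes. The base model name, corpus file, and hyperparameters below are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: parameter-efficient continual pre-training (CPT) with LoRA adapters.
# Model, data path, and hyperparameters are placeholders, not the paper's settings.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-3.1-8B"  # placeholder; the paper adapts three LLMs
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Low-rank adapters on the attention projections; well under 1% of parameters train.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Small dialect corpus (e.g., Québec French text) tokenized for causal-LM CPT.
corpus = load_dataset("text", data_files={"train": "quebec_french_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llm-quebec-french-lora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter weights are updated, the compute and storage footprint stays small, which is what makes this approach viable under the tight data and compute budgets the paper targets.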

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
11 pages

Category
Computer Science:
Computation and Language