Vuyko Mistral: Adapting LLMs for Low-Resource Dialectal Translation
By: Roman Kyslyi, Yuliia Maksymiuk, Ihor Pysmennyi
Potential Business Impact:
Teaches computers to understand a rare Ukrainian dialect.
In this paper we introduce the first effort to adapt large language models (LLMs) to a Ukrainian dialect, in our case Hutsul, a low-resource and morphologically complex dialect spoken in the Carpathian Highlands. We created a parallel corpus of 9852 dialect-to-standard Ukrainian sentence pairs and a dictionary of 7320 dialectal word mappings. We also addressed the data shortage by proposing an advanced Retrieval-Augmented Generation (RAG) pipeline to generate synthetic parallel translation pairs, expanding the corpus with 52142 examples. We fine-tuned multiple open-source LLMs using LoRA and evaluated them on a standard-to-dialect translation task, also comparing them against few-shot GPT-4o translation. In the absence of human annotators, we adopted a multi-metric evaluation strategy combining BLEU, chrF++, TER, and LLM-based judgment (GPT-4o). The results show that even small (7B) fine-tuned models outperform zero-shot baselines such as GPT-4o across both automatic and LLM-evaluated metrics. All data, models, and code are publicly released at: https://github.com/woters/vuyko-hutsul
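To make the two core steps named in the abstract concrete, here is a minimal sketch of LoRA adaptation with the Hugging Face peft library. The base checkpoint, adapter rank, and target modules below are illustrative assumptions based on the "Vuyko Mistral" name, not the paper's reported configuration.

```python
# Minimal LoRA fine-tuning sketch: only low-rank adapter weights are trained.
# Checkpoint and hyperparameters are assumptions, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # assumed 7B base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,                # adapter rank (assumption)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # adapters are a small fraction of the 7B weights
```

The automatic metrics in the multi-metric evaluation (BLEU, chrF++, TER) are all available through sacrebleu; chrF++ is chrF with word n-grams enabled. A sketch with placeholder sentences:

```python
# Scoring standard-to-dialect translations with the three automatic metrics.
from sacrebleu.metrics import BLEU, CHRF, TER

hyps = ["model translation 1", "model translation 2"]  # system outputs (Hutsul)
refs = [["reference 1", "reference 2"]]                # one reference per hypothesis

print(BLEU().corpus_score(hyps, refs))
print(CHRF(word_order=2).corpus_score(hyps, refs))  # word_order=2 yields chrF++
print(TER().corpus_score(hyps, refs))
```

The LLM-based judgment (GPT-4o) is a separate step not shown here, since its prompt format is specific to the paper.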
Similar Papers
Towards Open Foundation Language Model and Corpus for Macedonian: A Low-Resource Language
Computation and Language
Helps computers understand the Macedonian language better.
Lugha-Llama: Adapting Large Language Models for African Languages
Computation and Language
Teaches computers to understand African languages better.
LLM-Based Evaluation of Low-Resource Machine Translation: A Reference-less Dialect Guided Approach with a Refined Sylheti-English Benchmark
Computation and Language
Helps computers translate languages with many dialects.