Towards Open Foundation Language Model and Corpus for Macedonian: A Low-Resource Language
By: Stefan Krsteski, Matea Tashkovska, Borjan Sazdov, and more
Potential Business Impact:
Helps computers understand the Macedonian language better.
The increase in technological adoption worldwide comes with demands for novel tools to be used by the general population. Large Language Models (LLMs) provide a great opportunity in this respect, but their capabilities remain limited for low-resource languages, restricting applications in countries where such languages are spoken. We create several resources to facilitate the adoption of LLMs and to support research advancements for Macedonian. We collect the largest Macedonian corpus to date, consisting of 40GB of textual data and totaling 3.5B words. To support conversational applications, we collect a 106k-instance instruction dataset, carefully built to be culturally grounded. For evaluation, we construct a Macedonian evaluation suite covering seven benchmarks. Finally, we train domestic-yak, a state-of-the-art 8B-parameter model, on our curated datasets and evaluate it against eight baseline models using the newly constructed benchmark suite. Our model outperforms all existing models in the 8B parameter range across all benchmarks, and achieves performance comparable to models up to 10x larger. Furthermore, a qualitative analysis with native speakers reveals that our model is preferred over larger counterparts, receiving higher ratings for grammatical correctness and cultural appropriateness. All datasets, code, and model weights are openly released, setting a foundation for advancing LLMs in similarly underrepresented languages. These resources are publicly available at github.com/LVSTCK for source code, and at huggingface.co/LVSTCK for pretrained model weights and data.
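To illustrate how the openly released weights might be used, the sketch below loads a model from the Hugging Face Hub with the transformers library and generates a short Macedonian completion. The repository id "LVSTCK/domestic-yak-8B" is a hypothetical placeholder inferred from the organization page mentioned above; check huggingface.co/LVSTCK for the actual model and dataset names.

# Minimal sketch: generating Macedonian text with the released model.
# NOTE: the repo id below is an assumed placeholder, not a confirmed name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LVSTCK/domestic-yak-8B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Која е главната цел на овој проект?"  # "What is the main goal of this project?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))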
Similar Papers
Vuyko Mistral: Adapting LLMs for Low-Resource Dialectal Translation
Computation and Language
Teaches computers to translate a rare Ukrainian dialect.
Lugha-Llama: Adapting Large Language Models for African Languages
Computation and Language
Teaches computers to understand African languages better.
Exploring NLP Benchmarks in an Extremely Low-Resource Setting
Computation and Language
Helps computers understand rare languages better.