Gamayun's Path to Multilingual Mastery: Cost-Efficient Training of a 1.5B-Parameter LLM
By: Alexander Podolskiy, Semen Molokov, Timofey Gerasin, and more
Potential Business Impact:
Helps computers understand many languages, especially Russian.
We present Gamayun, a 1.5B-parameter multilingual language model trained entirely from scratch on 2.5T tokens. Designed for efficiency and deployment in resource-constrained environments, Gamayun addresses the lack of research on small non-English-centric LLMs by adopting a novel two-stage pre-training strategy: balanced multilingual training for cross-lingual alignment, followed by high-quality English enrichment to transfer performance gains across languages. Our model supports 12 languages, with a special focus on Russian. Despite a significantly smaller training budget than comparable models, Gamayun outperforms LLaMA3.2-1B (9T tokens) on all considered benchmarks and surpasses Qwen2.5-1.5B (18T tokens) on a wide range of English and multilingual tasks. It matches or exceeds Qwen3 (36T tokens) on most tasks outside advanced STEM, achieving state-of-the-art results in Russian among models of comparable size (1-2B parameters), including on the MERA benchmark.
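To make the two-stage strategy concrete, below is a minimal sketch of what such a data-mixture schedule could look like. The language set, token-budget split, and mixture weights are hypothetical illustrations chosen only so the totals line up with the 2.5T-token budget stated in the abstract; the paper's actual corpora and ratios are not specified here.

```python
# Hypothetical sketch of a two-stage pre-training data schedule:
# Stage 1 samples languages roughly uniformly (cross-lingual alignment),
# Stage 2 upweights high-quality English data (English enrichment).
# All concrete numbers below are placeholders, not the paper's values.

from dataclasses import dataclass

# Placeholder 12-language set; the paper's exact list may differ.
LANGUAGES = ["en", "ru", "de", "fr", "es", "pt", "it", "zh", "ja", "ko", "ar", "tr"]


@dataclass
class Stage:
    name: str
    token_budget_t: float          # tokens drawn in this stage, in trillions
    weights: dict[str, float]      # sampling weight per language corpus


def balanced_weights(langs: list[str]) -> dict[str, float]:
    """Stage 1: roughly uniform sampling across languages."""
    return {lang: 1.0 / len(langs) for lang in langs}


def english_enriched_weights(langs: list[str], en_share: float = 0.6) -> dict[str, float]:
    """Stage 2: give English a larger share, split the rest evenly."""
    rest = (1.0 - en_share) / (len(langs) - 1)
    return {lang: (en_share if lang == "en" else rest) for lang in langs}


# Assumed 1.5T + 1.0T split summing to the stated 2.5T total.
schedule = [
    Stage("balanced_multilingual", token_budget_t=1.5, weights=balanced_weights(LANGUAGES)),
    Stage("english_enrichment", token_budget_t=1.0, weights=english_enriched_weights(LANGUAGES)),
]

if __name__ == "__main__":
    for stage in schedule:
        print(f"{stage.name}: {stage.token_budget_t}T tokens")
        for lang, w in sorted(stage.weights.items(), key=lambda kv: -kv[1]):
            print(f"  {lang}: {w:.3f}")
```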
Similar Papers
Multilingual Machine Translation with Open Large Language Models at Practical Scale: An Empirical Study
Computation and Language
Helps computers translate across 28 languages.
MiniLingua: A Small Open-Source LLM for European Languages
Computation and Language
Makes AI understand many languages on your phone.
SLAM: Towards Efficient Multilingual Reasoning via Selective Language Alignment
Computation and Language
Helps computers understand and answer questions in many languages.