KyrgyzBERT: A Compact, Efficient Language Model for Kyrgyz NLP
By: Adilet Metinov, Gulida M. Kudakeeva, Gulnara D. Kabaeva
Potential Business Impact:
Helps computers understand the Kyrgyz language better.
Kyrgyz remains a low-resource language with limited foundational NLP tools. To address this gap, we introduce KyrgyzBERT, the first publicly available monolingual BERT-based language model for Kyrgyz. The model has 35.9M parameters and uses a custom tokenizer designed for the language's morphological structure. To evaluate performance, we create kyrgyz-sst2, a sentiment analysis benchmark built by translating the Stanford Sentiment Treebank and manually annotating the full test set. KyrgyzBERT fine-tuned on this dataset achieves an F1-score of 0.8280, competitive with a fine-tuned mBERT model five times larger. All models, data, and code are released to support future research in Kyrgyz NLP.
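For readers who want to see what the evaluation setup looks like in practice, the sketch below shows one way to fine-tune a BERT-style checkpoint on a translated SST-2-style sentiment dataset with Hugging Face Transformers and report F1. The hub identifiers (`metinovadilet/KyrgyzBERT`, `kyrgyz-sst2`), column names (`sentence`, `label`), and hyperparameters are assumptions for illustration, not the authors' exact configuration; consult the released code and models for the real values.

```python
# Minimal fine-tuning sketch (assumed hub ids and hyperparameters).
import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "metinovadilet/KyrgyzBERT"  # placeholder: check the authors' release
DATASET_NAME = "kyrgyz-sst2"             # placeholder: check the authors' release

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

dataset = load_dataset(DATASET_NAME)

def tokenize(batch):
    # Truncate/pad translated SST-2 sentences to a fixed length.
    return tokenizer(batch["sentence"], truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    # Binary F1, matching the metric reported in the abstract.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds)}

args = TrainingArguments(
    output_dir="kyrgyzbert-sst2",
    per_device_train_batch_size=32,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    compute_metrics=compute_metrics,
)

trainer.train()
trainer.evaluate()
```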
Similar Papers
Towards Nepali-language LLMs: Efficient GPT training with a Nepali BPE tokenizer
Computation and Language
Helps computers write Nepali news stories.
Human-Annotated NER Dataset for the Kyrgyz Language
Computation and Language
Helps computers understand Kyrgyz words better.
KuBERT: Central Kurdish BERT Model and Its Application for Sentiment Analysis
Computation and Language
Helps computers understand feelings in Kurdish.