Towards Nepali-language LLMs: Efficient GPT training with a Nepali BPE tokenizer

Published: December 16, 2025 | arXiv ID: 2512.14585v1

By: Adarsha Shrestha, Basanta Pokharel, Binit Shrestha, and more

Potential Business Impact:

Helps computers write Nepali news stories.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Nepali, a low-resource language spoken by over 32 million people, continues to face challenges in natural language processing (NLP) due to its complex grammar, agglutinative morphology, and limited availability of high-quality corpora. Most efforts to date have centered on basic encoder architectures, which remain insufficient for Nepali-specific text generation. This study presents a GPT-2-based Nepali language model trained with several strategies inspired by GPT-3, including optimized learning rate schedules, batch scaling, and architectural refinements. A custom 16k Byte-Pair Encoding (BPE) tokenizer was trained exclusively on Nepali text to ensure more consistent segmentation and improved input representation. The model was pretrained on a combined dataset comprising the 10.75 GB cleaned NepBERTa corpus and additional web-scraped Nepali news articles. FlashAttention was integrated to reduce memory usage and stabilize training. After two epochs, the model achieved a training loss of 3.168177, a validation loss of 3.081982, and a final perplexity of 21.80, demonstrating its capability to generate coherent Nepali news-style text.
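The reported figures are internally consistent: perplexity is the exponential of the mean cross-entropy loss, so the stated validation loss reproduces the stated perplexity. A minimal check in Python (the variable names are illustrative):

import math

validation_loss = 3.081982              # validation loss reported in the abstract
perplexity = math.exp(validation_loss)  # perplexity = exp(mean cross-entropy loss)
print(f"{perplexity:.2f}")              # prints 21.80, matching the reported final perplexity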
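The summary does not specify how the 16k Nepali BPE tokenizer was built. The sketch below shows one plausible setup using the Hugging Face tokenizers library; the corpus filename, the whitespace pre-tokenizer, and the special tokens are assumptions, not details taken from the paper.

from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

# Hypothetical setup: a byte-pair-encoding tokenizer trained only on Nepali text.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()          # assumption: simple whitespace pre-tokenization

trainer = BpeTrainer(
    vocab_size=16_000,                          # 16k vocabulary, as described in the abstract
    special_tokens=["[UNK]", "<|endoftext|>"],  # assumed special tokens
)

# "nepali_corpus.txt" is a placeholder for the cleaned Nepali training text.
tokenizer.train(files=["nepali_corpus.txt"], trainer=trainer)
tokenizer.save("nepali_bpe_16k.json")

Training the vocabulary exclusively on Nepali text, as the abstract describes, keeps Devanagari subword units intact instead of splitting them into the byte fragments a general-purpose English-centric tokenizer would produce.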

Page Count
6 pages

Category
Computer Science:
Computation and Language