NorwAI's Large Language Models: Technical Report

Published: January 6, 2026 | arXiv ID: 2601.03034v1

By: Jon Atle Gulla, Peng Liu, Lemei Zhang

Potential Business Impact:

Makes computers better at understanding and speaking Norwegian.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Norwegian, spoken by approximately five million people, remains underrepresented in many of the most significant breakthroughs in Natural Language Processing (NLP). To address this gap, the NorLLM team at NorwAI has developed a family of models specifically tailored to Norwegian and other Scandinavian languages, building on diverse Transformer-based architectures such as GPT, Mistral, Llama2, Mixtral and Magistral. These models are either pretrained from scratch or continually pretrained on 25B to 88.45B tokens, using a Norwegian-extended tokenizer and advanced post-training strategies to optimize performance, enhance robustness, and improve adaptability across various real-world tasks. Notably, instruction-tuned variants (e.g., Mistral-7B-Instruct and Mixtral-8x7B-Instruct) showcase strong assistant-style capabilities, underscoring their potential for practical deployment in interactive and domain-specific applications. The NorwAI large language models are openly available to Nordic organizations, companies and students for both research and experimental use. This report provides detailed documentation of the model architectures, training data, tokenizer design, fine-tuning strategies, deployment, and evaluations.
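Since the instruction-tuned variants are intended for assistant-style use and the models are distributed for research and experimentation, a minimal sketch of querying one of them with the Hugging Face transformers library is shown below. The repository id "NorwAI/NorwAI-Mistral-7B-instruct" and the example prompt are assumptions for illustration; check the NorwAI organization page for the exact model names and access terms.

```python
# Minimal sketch: load a NorwAI instruction-tuned model and ask it a question
# in Norwegian. The repo id below is an assumption; actual names may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NorwAI/NorwAI-Mistral-7B-instruct"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# "What is the capital of Norway?" in Norwegian (Bokmaal).
prompt = "Hva er hovedstaden i Norge?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the Norwegian-extended tokenizer ships with the model repository and is loaded transparently by AutoTokenizer, so no extra vocabulary handling is needed on the caller's side.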

Country of Origin
🇳🇴 Norway

Page Count
15 pages

Category
Computer Science:
Computation and Language