NorwAI's Large Language Models: Technical Report
By: Jon Atle Gulla, Peng Liu, Lemei Zhang
Potential Business Impact:
Makes computers understand and speak Norwegian better.
Norwegian, spoken by approximately five million people, remains underrepresented in many of the most significant breakthroughs in Natural Language Processing (NLP). To address this gap, the NorLLM team at NorwAI has developed a family of models specifically tailored to Norwegian and other Scandinavian languages, building on diverse Transformer-based architectures such as GPT, Mistral, Llama2, Mixtral, and Magistral. These models are either pretrained from scratch or continually pretrained on 25B–88.45B tokens, using a Norwegian-extended tokenizer and advanced post-training strategies to optimize performance, enhance robustness, and improve adaptability across various real-world tasks. Notably, instruction-tuned variants (e.g., Mistral-7B-Instruct and Mixtral-8x7B-Instruct) showcase strong assistant-style capabilities, underscoring their potential for practical deployment in interactive and domain-specific applications. The NorwAI large language models are openly available to Nordic organizations, companies, and students for both research and experimental use. This report provides detailed documentation of the model architectures, training data, tokenizer design, fine-tuning strategies, deployment, and evaluations.
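Since the report describes adapting existing open architectures to Norwegian rather than introducing a new one, the most practical entry point is loading a released checkpoint. Below is a minimal sketch of querying an instruction-tuned NorwAI model with Hugging Face transformers; the repo ID NorwAI/NorwAI-Mistral-7B-instruct is an assumption, so check the NorwAI organization on Hugging Face for the exact model names and the access terms noted above.

```python
# Minimal sketch: chat-style generation with an instruction-tuned NorwAI model
# via Hugging Face transformers. The repo ID below is an assumption; the actual
# model names and gated-access terms are set by the NorwAI organization.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NorwAI/NorwAI-Mistral-7B-instruct"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)  # Norwegian-extended tokenizer
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Forklar kort hva NorwAI er."  # "Briefly explain what NorwAI is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```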
Similar Papers
Towards Multilingual LLM Evaluation for Baltic and Nordic languages: A study on Lithuanian History
Computation and Language
Tests how well computers understand history in many languages.
Large Language Models and Arabic Content: A Review
Computation and Language
Helps computers understand and use Arabic better.
Evaluating LLMs on Generating Age-Appropriate Child-Like Conversations
Computation and Language
Makes computers talk like young kids.