NorEval: A Norwegian Language Understanding and Generation Evaluation Benchmark
By: Vladislav Mikhailov, Tita Enstad, David Samuel, and more
Potential Business Impact:
Tests how well computers understand and generate Norwegian.
This paper introduces NorEval, a new and comprehensive evaluation suite for large-scale standardized benchmarking of Norwegian generative language models (LMs). NorEval consists of 24 high-quality human-created datasets -- of which five are created from scratch. In contrast to existing benchmarks for Norwegian, NorEval covers a broad spectrum of task categories targeting Norwegian language understanding and generation, establishes human baselines, and focuses on both of the official written standards of the Norwegian language: Bokmål and Nynorsk. All our datasets and a collection of over 100 human-written prompts are integrated into LM Evaluation Harness, ensuring flexible and reproducible evaluation. We describe the NorEval design and present the results of benchmarking 19 open-source pre-trained and instruction-tuned LMs for Norwegian in various scenarios. Our benchmark, evaluation framework, and annotation materials are publicly available.
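Since the datasets and prompts are integrated into LM Evaluation Harness, a NorEval task can in principle be run through the harness's standard Python API. The sketch below is illustrative only: the task identifier noreval_example_task and the model checkpoint are assumptions, not names confirmed by the paper; consult the NorEval repository for the actual task identifiers.

```python
# Minimal sketch: evaluating a Norwegian LM on a NorEval task via
# EleutherAI's LM Evaluation Harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face backend
    model_args="pretrained=norallm/normistral-7b-warm",  # assumed checkpoint name
    tasks=["noreval_example_task"],  # hypothetical NorEval task id
    num_fewshot=0,  # zero-shot evaluation
)
print(results["results"])  # per-task metric scores
```

The same run can be launched from the command line with the harness's lm_eval entry point (passing --model, --model_args, and --tasks), which is often more convenient for batch benchmarking.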
Similar Papers
HypoEval: Hypothesis-Guided Evaluation for Natural Language Generation
Computation and Language
Helps computers judge writing better with less help.
GlotEval: A Test Suite for Massively Multilingual Evaluation of Large Language Models
Computation and Language
Tests computer language skills in many languages.
OneEval: Benchmarking LLM Knowledge-intensive Reasoning over Diverse Knowledge Bases
Computation and Language
Tests computers on using facts and rules.