MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch
By: Nikolay Banar, Ehsan Lotfi, Jens Van Nooten and more
Potential Business Impact:
Helps computers understand Dutch better.
Recently, embedding resources, including models, benchmarks, and datasets, have been widely released to support a variety of languages. However, the Dutch language remains underrepresented, typically comprising only a small fraction of the published multilingual resources. To address this gap and encourage the further development of Dutch embeddings, we introduce new resources for their evaluation and generation. First, we introduce the Massive Text Embedding Benchmark for Dutch (MTEB-NL), which includes both existing Dutch datasets and newly created ones, covering a wide range of tasks. Second, we provide a training dataset compiled from available Dutch retrieval datasets, complemented with synthetic data generated by large language models to expand task coverage beyond retrieval. Finally, we release a series of E5-NL models: compact yet efficient embedding models that demonstrate strong performance across multiple tasks. We make our resources publicly available through the Hugging Face Hub and the MTEB package.
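Benchmarks like MTEB-NL typically score an embedding model by comparing the vectors it produces, most commonly with cosine similarity (e.g. for retrieval and semantic similarity tasks). The sketch below illustrates that core comparison with toy 3-dimensional vectors; the numbers are invented for illustration, and real models such as E5-NL produce embeddings with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (||a|| * ||b||), in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" (hypothetical values for illustration only).
query = [0.2, 0.8, 0.1]
doc_relevant = [0.25, 0.75, 0.05]
doc_unrelated = [0.9, 0.1, 0.4]

# A good embedding model places the relevant document closer to the query.
print(cosine_similarity(query, doc_relevant) >
      cosine_similarity(query, doc_unrelated))
```

In an actual evaluation run, these vectors would come from the model under test, and task-specific metrics (e.g. nDCG for retrieval) would aggregate many such comparisons.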
Similar Papers
AfriMTEB and AfriE5: Benchmarking and Adapting Text Embedding Models for African Languages
Computation and Language
Helps computers understand African languages better.
MIEB: Massive Image Embedding Benchmark
CV and Pattern Recognition
Tests how well computers understand pictures and words.
PatenTEB: A Comprehensive Benchmark and Model Family for Patent Text Embedding
Computation and Language
Finds patents faster and better.