Score: 2

AfriMTEB and AfriE5: Benchmarking and Adapting Text Embedding Models for African Languages

Published: October 27, 2025 | arXiv ID: 2510.23896v1

By: Kosei Uemura, Miaoran Zhang, David Ifeoluwa Adelani

Potential Business Impact:

Improves how computers represent text in African languages, making search, chatbots, and content moderation (e.g., hate speech and intent detection) work better across dozens of those languages.

Business Areas:
Text Analytics, Data and Analytics, Software

Text embeddings are an essential building block for several NLP tasks, such as retrieval-augmented generation, which is crucial for preventing hallucinations in LLMs. Despite the recent release of the massively multilingual MTEB (MMTEB), African languages remain underrepresented, with existing tasks often repurposed from translation benchmarks such as FLORES clustering or SIB-200. In this paper, we introduce AfriMTEB -- a regional expansion of MMTEB covering 59 languages, 14 tasks, and 38 datasets, including six newly added ones. Unlike many MMTEB datasets that cover fewer than five languages, the new additions span 14 to 56 African languages and introduce entirely new tasks, such as hate speech detection, intent detection, and emotion classification, which were not previously covered. Complementing this, we present AfriE5, an adaptation of the instruction-tuned mE5 model to African languages through cross-lingual contrastive distillation. Our evaluation shows that AfriE5 achieves state-of-the-art performance, outperforming strong baselines such as Gemini-Embeddings and mE5.
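
The abstract names cross-lingual contrastive distillation as the adaptation method behind AfriE5 but does not spell out the objective here. Below is a minimal sketch, assuming the common formulation in which a frozen teacher embeds English sentences and a trainable student embeds their African-language translations, with an in-batch InfoNCE loss pulling aligned pairs together; the shapes, temperature, and symmetric loss form are illustrative assumptions, not the authors' exact recipe.

```python
# Hypothetical sketch of cross-lingual contrastive distillation,
# NOT the authors' implementation. A frozen teacher embeds English
# source sentences; a trainable student embeds their African-language
# translations. In-batch InfoNCE pulls each student embedding toward
# the teacher embedding of its paired sentence, treating the other
# teacher embeddings in the batch as negatives.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.05):
    """student_emb: (B, D) African-language embeddings (trainable).
    teacher_emb: (B, D) aligned English embeddings (frozen teacher)."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1).detach()  # no gradient to teacher
    logits = s @ t.T / temperature                 # (B, B) cosine similarities
    targets = torch.arange(s.size(0))              # row i pairs with column i
    # Symmetric InfoNCE: align student->teacher and teacher->student.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage with random vectors standing in for encoder outputs.
B, D = 8, 768
student = torch.randn(B, D, requires_grad=True)
teacher = torch.randn(B, D)
loss = contrastive_distillation_loss(student, teacher)
loss.backward()
print(f"loss={loss.item():.4f}")
```

Detaching the teacher keeps the distillation one-directional: gradients move only the student toward the teacher's embedding space, which is what lets a multilingual student inherit the teacher's retrieval behavior for new languages.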

Country of Origin
🇨🇦 🇩🇪 Canada, Germany

Page Count
15 pages

Category
Computer Science:
Computation and Language