AfriMTEB and AfriE5: Benchmarking and Adapting Text Embedding Models for African Languages
By: Kosei Uemura, Miaoran Zhang, David Ifeoluwa Adelani
Potential Business Impact:
Helps computers understand and search text in African languages better.
Text embeddings are an essential building block of many NLP tasks, such as retrieval-augmented generation (RAG), which is crucial for preventing hallucinations in LLMs. Despite the recent release of the massively multilingual MTEB (MMTEB), African languages remain underrepresented, with existing tasks often repurposed from translation benchmarks such as FLORES clustering or SIB-200. In this paper, we introduce AfriMTEB -- a regional expansion of MMTEB covering 59 languages, 14 tasks, and 38 datasets, including six newly added datasets. Unlike many MMTEB datasets, which include fewer than five languages, the new additions span 14 to 56 African languages and introduce entirely new tasks, such as hate speech detection, intent detection, and emotion classification, that were not previously covered. Complementing this, we present AfriE5, an adaptation of the instruction-tuned mE5 model to African languages through cross-lingual contrastive distillation. Our evaluation shows that AfriE5 achieves state-of-the-art performance, outperforming strong baselines such as Gemini-Embeddings and mE5.
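The abstract names cross-lingual contrastive distillation as the adaptation technique behind AfriE5. As a rough illustration of that idea, and not the authors' actual training code, the PyTorch sketch below assumes a batch of parallel sentence pairs: a frozen teacher embeds the English side, a student encoder embeds the African-language side, and an InfoNCE-style loss pulls each student embedding toward the teacher embedding of its own pair while treating other pairs in the batch as negatives. The names teacher_emb, student_emb, and the temperature value are illustrative assumptions.

```python
# A minimal sketch of cross-lingual contrastive distillation (assumed
# setup, not the paper's training code). teacher_emb[i] embeds the
# English side and student_emb[i] the African-language side of pair i.
import torch
import torch.nn.functional as F

def contrastive_distillation_loss(teacher_emb: torch.Tensor,
                                  student_emb: torch.Tensor,
                                  temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style loss: each student embedding should be most similar
    to the teacher embedding of its parallel sentence, with the other
    teacher embeddings in the batch serving as in-batch negatives."""
    teacher_emb = F.normalize(teacher_emb, dim=-1)
    student_emb = F.normalize(student_emb, dim=-1)
    logits = student_emb @ teacher_emb.T / temperature  # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random vectors standing in for encoder outputs.
B, d = 8, 768
loss = contrastive_distillation_loss(torch.randn(B, d), torch.randn(B, d))
```

Since AfriMTEB extends MMTEB, a model trained this way would presumably be scored with the standard MTEB evaluation harness; the specific AfriMTEB task identifiers are not listed here.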
Similar Papers
MTEB-NL and E5-NL: Embedding Benchmark and Models for Dutch
Computation and Language
Helps computers understand Dutch better.
AfriSpeech-MultiBench: A Verticalized Multidomain Multicountry Benchmark Suite for African Accented English ASR
Computation and Language
Tests voice tools for over 100 African accents.
AFRICAPTION: Establishing a New Paradigm for Image Captioning in African Languages
Computation and Language
Lets computers describe pictures in African languages.