The State of Large Language Models for African Languages: Progress and Challenges
By: Kedir Yassin Hussen, Walelign Tewabe Sewunetie, Abinew Ali Ayele, and more
Potential Business Impact:
Helps computers understand more African languages.
Large Language Models (LLMs) are transforming Natural Language Processing (NLP), but their benefits remain largely absent for Africa's 2,000 low-resource languages. This paper comparatively analyzes African language coverage across six LLMs, eight Small Language Models (SLMs), and six Specialized SLMs (SSLMs). The evaluation covers language coverage, training datasets, technical limitations, script issues, and language modelling roadmaps. The work identifies 42 supported African languages and 23 publicly available datasets, and it reveals a stark gap: only four languages (Amharic, Swahili, Afrikaans, and Malagasy) are consistently supported, while over 98% of African languages remain unsupported. Moreover, the review shows that only the Latin, Arabic, and Ge'ez scripts are covered, while 20 other actively used scripts are neglected. The primary challenges include data scarcity, tokenization bias, very high computational costs, and inadequate evaluation. Addressing these issues demands language standardization, community-driven corpus development, and effective adaptation methods for African languages.
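To make the tokenization-bias point concrete, here is a minimal sketch (an illustration added for this summary, not code from the paper) that compares how an English-centric byte-pair tokenizer splits an English sentence versus an Amharic one; the model choice (GPT-2) and the example sentences are assumptions chosen only for demonstration.

```python
# Minimal sketch of tokenization bias: subword tokenizers trained mostly on
# English text tend to split low-resource-language sentences (here Amharic,
# written in Ge'ez script) into far more tokens than comparable English text.
# The model choice and example sentences are illustrative assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # English-centric byte-level BPE

samples = {
    "English": "Language models should serve every community.",
    "Amharic": "የቋንቋ ሞዴሎች ሁሉንም ማህበረሰብ ማገልገል አለባቸው።",
}

for lang, text in samples.items():
    tokens = tokenizer.tokenize(text)
    # More tokens per sentence means higher inference cost and a shorter
    # effective context window for that language.
    print(f"{lang}: {len(tokens)} tokens")
```

In practice, the Ge'ez-script sentence falls back to many byte-level pieces, so the same meaning consumes several times more tokens than its English counterpart, which is one concrete form of the cost and evaluation disparities the survey describes.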
Similar Papers
Lugha-Llama: Adapting Large Language Models for African Languages
Computation and Language
Teaches computers to understand African languages better.
Where Are We? Evaluating LLM Performance on African Languages
Computation and Language
Helps computers understand African languages better.
The African Languages Lab: A Collaborative Approach to Advancing Low-Resource African NLP
Computation and Language
Helps computers understand many African languages.