AFRICAPTION: Establishing a New Paradigm for Image Captioning in African Languages
By: Mardiyyah Oduwole, Prince Mireku, Fatimo Adebanjo, and more
Potential Business Impact:
Lets computers describe pictures in African languages.
Multimodal AI research has overwhelmingly focused on high-resource languages, hindering the democratization of advancements in the field. To address this, we present AfriCaption, a comprehensive framework for multilingual image captioning in 20 African languages. Our contributions are threefold: (i) a curated dataset built on Flickr8k, featuring semantically aligned captions generated via a context-aware selection and translation process; (ii) a dynamic, context-preserving pipeline that ensures ongoing quality through model ensembling and adaptive substitution; and (iii) the AfriCaption model, a 0.5B parameter vision-to-text architecture that integrates SigLIP and NLLB200 for caption generation across under-represented languages. This unified framework safeguards data quality over time and establishes the first scalable image-captioning resource for under-represented African languages, laying the groundwork for truly inclusive multimodal AI.
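The described coupling of a SigLIP vision encoder with an NLLB-200-style multilingual decoder suggests a standard bridged vision-to-text design. The sketch below is a minimal illustration of that wiring, not the authors' released code: the checkpoint names (google/siglip-base-patch16-224, facebook/nllb-200-distilled-600M), the linear bridge projection, and the caption helper are assumptions made for illustration, and the bridge would have to be trained on the aligned captions before the outputs mean anything.

```python
# Hypothetical sketch of a SigLIP -> NLLB-200 vision-to-text bridge.
# Checkpoint names, the linear bridge, and the caption() helper are
# assumptions for illustration, not the AfriCaption release.
import torch
from torch import nn
from PIL import Image
from transformers import (
    AutoImageProcessor,
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    SiglipVisionModel,
)
from transformers.modeling_outputs import BaseModelOutput

vision = SiglipVisionModel.from_pretrained("google/siglip-base-patch16-224")
processor = AutoImageProcessor.from_pretrained("google/siglip-base-patch16-224")
nllb = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")

# Linear projection from SigLIP's hidden size to NLLB's d_model.
# Randomly initialized here; in a real system it is learned during training.
bridge = nn.Linear(vision.config.hidden_size, nllb.config.d_model)

@torch.no_grad()
def caption(image: Image.Image, tgt_lang: str = "yor_Latn") -> str:
    """Encode the image with SigLIP, project the patch embeddings, and let
    the NLLB decoder generate a caption in the requested target language."""
    pixel_values = processor(images=image, return_tensors="pt").pixel_values
    patches = vision(pixel_values=pixel_values).last_hidden_state  # (1, patches, h)
    # Present the projected patch embeddings as if they were text-encoder outputs.
    enc = BaseModelOutput(last_hidden_state=bridge(patches))
    out = nllb.generate(
        encoder_outputs=enc,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_new_tokens=40,
    )
    return tokenizer.batch_decode(out, skip_special_tokens=True)[0]
```

Under this kind of setup, a common choice is to freeze the vision encoder and train only the projection (and optionally the decoder) on the caption pairs, with the NLLB language-code token steering which of the 20 African languages is generated.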
Similar Papers
Multilingual Training-Free Remote Sensing Image Captioning
CV and Pattern Recognition
Lets computers describe satellite pictures in any language.
The African Languages Lab: A Collaborative Approach to Advancing Low-Resource African NLP
Computation and Language
Helps computers understand many African languages.
AfriMTEB and AfriE5: Benchmarking and Adapting Text Embedding Models for African Languages
Computation and Language
Helps computers understand African languages better.