NativQA Framework: Enabling LLMs with Native, Local, and Everyday Knowledge
By: Firoj Alam, Md Arid Hasan, Sahinur Rahman Laskar, and more
Potential Business Impact:
Enables building question-answering systems in any language, grounded in local, everyday knowledge.
The rapid advancement of large language models (LLMs) has raised concerns about cultural bias, fairness, and their applicability in diverse linguistic and underrepresented regional contexts. To enhance and benchmark the capabilities of LLMs, there is a need to develop large-scale resources focused on multilingual, local, and cultural contexts. In this study, we propose the NativQA framework, which can seamlessly construct large-scale, culturally and regionally aligned QA datasets in native languages. The framework utilizes user-defined seed queries and leverages search engines to collect location-specific, everyday information. It has been evaluated across 39 locations in 24 countries and in 7 languages, ranging from extremely low-resource to high-resource, resulting in over 300K question-answer (QA) pairs. The developed resources can be used for LLM benchmarking and further fine-tuning. The framework has been made publicly available for the community (https://gitlab.com/nativqa/nativqa-framework).
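As a rough illustration of the pipeline the abstract describes (location-specific seed queries sent to a search engine, with question-like results kept as QA pairs), here is a minimal Python sketch. All names here (QAPair, run_search, collect_qa) and the naive localization and filtering logic are hypothetical, not the framework's actual API; the real implementation lives in the GitLab repository linked above.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str
    location: str
    language: str

def run_search(query: str) -> list[dict]:
    """Hypothetical stand-in for a search-engine API call.

    A real pipeline would query a web search API and return result
    dicts with "title" and "snippet" keys; this stub returns nothing.
    """
    return []

def collect_qa(seed_queries: list[str], location: str, language: str) -> list[QAPair]:
    """Expand each seed query with the target location, search for it,
    and keep question-like result titles paired with their snippets."""
    qa_pairs = []
    for seed in seed_queries:
        localized_query = f"{seed} in {location}"  # naive localization of the seed
        for result in run_search(localized_query):
            title = result.get("title", "")
            snippet = result.get("snippet", "")
            if title.endswith("?") and snippet:  # crude filter for question-shaped titles
                qa_pairs.append(QAPair(title, snippet, location, language))
    return qa_pairs

if __name__ == "__main__":
    pairs = collect_qa(["best local breakfast"], location="Doha", language="Arabic")
    print(f"Collected {len(pairs)} QA pairs")
```

In practice a pipeline like this would also deduplicate results, verify answers, and translate or validate questions in the target language; the sketch only shows the seed-query expansion and collection step.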
Similar Papers
XLQA: A Benchmark for Locale-Aware Multilingual Open-Domain Question Answering
Computation and Language
Tests AI on questions whose answers vary by locale and culture.
Do You Know About My Nation? Investigating Multilingual Language Models' Cultural Literacy Through Factual Knowledge
Computation and Language
Measures how well multilingual models know facts about different nations.
A Survey of Large Language Model Agents for Question Answering
Computation and Language
Surveys how LLM agents reason to answer questions.