SpokenNativQA: Multilingual Everyday Spoken Queries for LLMs
By: Firoj Alam, Md Arid Hasan, Shammur Absar Chowdhury
Potential Business Impact:
Tests how well computers understand spoken questions.
Large Language Models (LLMs) have demonstrated remarkable performance across various disciplines and tasks. However, benchmarking their capabilities on multilingual spoken queries remains largely unexplored. In this study, we introduce SpokenNativQA, the first multilingual and culturally aligned spoken question-answering (SQA) dataset designed to evaluate LLMs in real-world conversational settings. The dataset comprises approximately 33,000 naturally spoken questions and answers in multiple languages, including low-resource and dialect-rich languages, providing a robust benchmark for assessing LLM performance in speech-based interactions. SpokenNativQA addresses the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. We benchmark different ASR systems and LLMs for SQA and present our findings. We release the data at https://huggingface.co/datasets/QCRI/SpokenNativQA and the experimental scripts at https://llmebench.qcri.org/ for the research community.
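The evaluation setup the abstract describes, transcribing a spoken question with an ASR system and then feeding the transcript to an LLM, can be sketched in a few lines. The following is a minimal, hypothetical sketch, not the authors' pipeline: the split name, the column names ("audio", "question", "answer"), and the Whisper checkpoint are all assumptions; consult the dataset card on Hugging Face for the actual schema and llmebench for the authors' scripts.

from datasets import load_dataset
from transformers import pipeline

# Load the spoken-QA benchmark. The split name "test" and the column
# names used below are assumptions; check the dataset card at
# https://huggingface.co/datasets/QCRI/SpokenNativQA for the real schema.
ds = load_dataset("QCRI/SpokenNativQA", split="test")

# Step 1: transcribe the spoken question with an off-the-shelf ASR model.
# Whisper-small is an illustrative choice, not necessarily one the paper used.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def transcribe_and_prompt(example):
    audio = example["audio"]  # assumed to be a datasets Audio feature
    transcript = asr({"raw": audio["array"],
                      "sampling_rate": audio["sampling_rate"]})["text"]
    # Step 2: hand the ASR transcript to whichever LLM is under evaluation;
    # here we only construct the prompt and leave the LLM call to the caller.
    prompt = f"Answer the following question concisely:\n{transcript}"
    return transcript, prompt

transcript, prompt = transcribe_and_prompt(ds[0])
print(transcript)
print(prompt)

Scoring the LLM's answer to the ASR transcript against the gold answer, and optionally against its answer to the reference text question, isolates how much ASR errors (accents, dialects, speech variability) degrade QA quality, which is the kind of speech-versus-text comparison this benchmark is designed to support.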
Similar Papers
NativQA Framework: Enabling LLMs with Native, Local, and Everyday Knowledge
Computation and Language
Gives AI local, everyday knowledge in any language.
Evaluating Large Language Model with Knowledge Oriented Language Specific Simple Question Answering
Computation and Language
Tests if AI knows facts in many languages.
IndicSQuAD: A Comprehensive Multilingual Question Answering Dataset for Indic Languages
Computation and Language
Helps computers answer questions in Indian languages.