Beyond MCQ: An Open-Ended Arabic Cultural QA Benchmark with Dialect Variants
By: Hunzalah Hassan Bhatti, Firoj Alam
Potential Business Impact:
Helps computers understand different Arabic dialects better.
Large Language Models (LLMs) are increasingly used to answer everyday questions, yet their performance on culturally grounded and dialectal content remains uneven across languages. We propose a comprehensive method that (i) translates Modern Standard Arabic (MSA) multiple-choice questions (MCQs) into English and several Arabic dialects, (ii) converts them into open-ended questions (OEQs), (iii) benchmarks a range of zero-shot and fine-tuned LLMs under both MCQ and OEQ settings, and (iv) generates chain-of-thought (CoT) rationales to fine-tune models for step-by-step reasoning. Using this method, we extend an existing dataset so that its QA pairs are aligned in parallel across multiple language varieties, making it, to our knowledge, the first of its kind. We conduct extensive experiments with both open and closed models. Our findings show that (i) models underperform on Arabic dialects, revealing persistent gaps in culturally grounded and dialect-specific knowledge; (ii) Arabic-centric models perform well on MCQs but struggle with OEQs; and (iii) CoT improves judged correctness while yielding mixed results on n-gram-based metrics. The developed dataset will be publicly released to support further research on culturally and linguistically inclusive evaluation.
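As an illustration of step (ii), the sketch below shows how the MCQ-to-OEQ conversion could be driven through an LLM. This is a minimal sketch under stated assumptions: the prompt wording, the `convert_mcq_to_oeq` helper, the example question, and the model choice are all hypothetical and are not taken from the paper.

```python
# Minimal sketch of MCQ-to-OEQ conversion via an OpenAI-style chat API.
# Assumptions: OPENAI_API_KEY is set in the environment; the prompt design
# and model name are illustrative, not the paper's actual configuration.
from openai import OpenAI

client = OpenAI()

def convert_mcq_to_oeq(question: str, choices: list[str], answer: str) -> str:
    """Rewrite a multiple-choice question as an open-ended question.

    The distractors are dropped so a model must recall the answer
    rather than recognize it among the options.
    """
    prompt = (
        "Rewrite the following multiple-choice question as a single "
        "open-ended question whose answer is the given correct choice. "
        "Do not mention the answer options.\n\n"
        f"Question: {question}\n"
        f"Choices: {', '.join(choices)}\n"
        f"Correct choice: {answer}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the paper does not specify this
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

# Example usage with an invented, culturally grounded MCQ:
oeq = convert_mcq_to_oeq(
    question="Which dish is traditionally served during Eid in the Gulf region?",
    choices=["Machboos", "Sushi", "Paella", "Pierogi"],
    answer="Machboos",
)
print(oeq)
```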
Similar Papers
From National Curricula to Cultural Awareness: Constructing Open-Ended Culture-Specific Question Answering Dataset
Computation and Language
Teaches computers about Korean culture to answer questions better.
DialectalArabicMMLU: Benchmarking Dialectal Capabilities in Arabic and Multilingual Language Models
Computation and Language
Tests whether computers understand different Arabic dialects.
Afri-MCQA: Multimodal Cultural Question Answering for African Languages
Computation and Language
Helps computers understand African languages and cultures.