CUS-QA: Local-Knowledge-Oriented Open-Ended Question Answering Dataset
By: Jindřich Libovický, Jindřich Helcl, Andrei Manea, and more
Potential Business Impact:
Helps computers answer questions about places.
We introduce a benchmark for open-ended regional question answering that encompasses both textual and visual modalities. Our dataset consists of manually curated questions and answers grounded in Wikipedia, created by native speakers from Czechia, Slovakia, and Ukraine, with accompanying English translations. It includes both purely textual questions and those requiring visual understanding. As strong baselines, we evaluate state-of-the-art large language models (LLMs) through prompting and complement this with human judgments of answer correctness. Using these human evaluations, we analyze the reliability of existing automatic evaluation metrics. Our baseline results highlight a significant gap in regional knowledge among current LLMs. Moreover, we find that, apart from LLM-based evaluation, automatic metrics correlate only minimally with human judgment. We release this dataset as a resource to (1) assess regional knowledge in LLMs, (2) study cross-lingual generation consistency in a challenging setting, and (3) advance the development of evaluation metrics for open-ended question answering.
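The metric-reliability analysis described above typically boils down to a rank correlation between automatic metric scores and human correctness judgments. As a minimal sketch (the scores below are invented for illustration, not taken from the paper), Spearman correlation can be computed in pure Python:

```python
def ranks(xs):
    """Assign ranks (1-based), averaging ranks over tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        # extend the window over equal values (ties)
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(a, b):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Invented example: human correctness judgments vs. an automatic metric
human = [1.0, 0.0, 0.5, 1.0, 0.0]
metric = [0.9, 0.2, 0.4, 0.7, 0.3]
print(f"Spearman rho = {spearman(human, metric):.3f}")
```

A low correlation on real data would indicate, as the paper reports for non-LLM metrics, that the automatic metric is an unreliable proxy for human judgment on open-ended answers.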
Similar Papers
From National Curricula to Cultural Awareness: Constructing Open-Ended Culture-Specific Question Answering Dataset
Computation and Language
Teaches computers Korean culture for better answers.
MultiWikiQA: A Reading Comprehension Benchmark in 300+ Languages
Computation and Language
Helps computers understand text in many languages.