MultiWikiQA: A Reading Comprehension Benchmark in 300+ Languages
By: Dan Saattrup Smart
Potential Business Impact:
Helps computers answer questions about text in many languages.
We introduce a new reading comprehension dataset, dubbed MultiWikiQA, which covers 306 languages. The context data comes from Wikipedia articles, with questions generated by an LLM and the answers appearing verbatim in the Wikipedia articles. We conduct a crowdsourced human evaluation of the fluency of the generated questions across 30 of the languages, providing evidence that the questions are of good quality. We evaluate 6 different language models, both decoder and encoder models of varying sizes, showing that the benchmark is sufficiently difficult and that there is a large performance discrepancy amongst the languages. The dataset and survey evaluations are freely available.
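The abstract describes a SQuAD-style extractive format: a Wikipedia context, an LLM-generated question, and an answer that occurs verbatim in the context. The following Python sketch illustrates what such a record might look like and how the verbatim-answer property can be checked. The field names follow the common SQuAD convention and the example values are illustrative assumptions, not the paper's documented schema.

```python
# Minimal sketch (not the authors' code): a SQuAD-style extractive QA record
# with a Wikipedia context, an LLM-generated question, and an answer that
# appears verbatim in the context. Field names assume the SQuAD convention.

record = {
    "context": "Lille is a city in northern France ...",        # Wikipedia article text (illustrative)
    "question": "In which part of France is Lille located?",    # LLM-generated question (illustrative)
    "answers": {"text": ["northern France"], "answer_start": [19]},
}

# Because answers are verbatim spans of the context, a basic sanity check
# is that each answer string matches the context at its start offset.
for text, start in zip(record["answers"]["text"], record["answers"]["answer_start"]):
    assert record["context"][start:start + len(text)] == text
```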
Similar Papers
WikiMixQA: A Multimodal Benchmark for Question Answering over Tables and Charts
Computation and Language
Helps computers understand complex charts and tables.
XLQA: A Benchmark for Locale-Aware Multilingual Open-Domain Question Answering
Computation and Language
Tests AI on questions from different cultures and locales.