Diagnosing and Addressing Pitfalls in KG-RAG Datasets: Toward More Reliable Benchmarking
By: Liangliang Zhang, Zhuorui Jiang, Hongliang Chi, and more
Potential Business Impact:
Makes question-answering benchmarks for AI systems more accurate and more challenging.
Knowledge Graph Question Answering (KGQA) systems rely on high-quality benchmarks to evaluate complex multi-hop reasoning. However, despite their widespread use, popular datasets such as WebQSP and CWQ suffer from critical quality issues, including inaccurate or incomplete ground-truth annotations; poorly constructed questions that are ambiguous, trivial, or unanswerable; and outdated or inconsistent knowledge. Through a manual audit of 16 popular KGQA datasets, including WebQSP and CWQ, we find that the average factual correctness rate is only 57%. To address these issues, we introduce KGQAGen, an LLM-in-the-loop framework that systematically resolves these pitfalls. KGQAGen combines structured knowledge grounding, LLM-guided generation, and symbolic verification to produce challenging and verifiable QA instances. Using KGQAGen, we construct KGQAGen-10k, a ten-thousand-scale benchmark grounded in Wikidata, and evaluate a diverse set of KG-RAG models. Experimental results demonstrate that even state-of-the-art systems struggle on this benchmark, highlighting its ability to expose limitations of existing models. Our findings advocate for more rigorous benchmark construction and position KGQAGen as a scalable framework for advancing KGQA evaluation.
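To make the grounding–generation–verification idea concrete, below is a minimal sketch of what such an LLM-in-the-loop pipeline could look like. It assumes the public Wikidata SPARQL endpoint and a placeholder LLM call; the names `run_sparql`, `llm_generate_qa`, `verify_instance`, and `generate_instances` are illustrative assumptions, not KGQAGen's actual API.

```python
# Illustrative sketch of an LLM-in-the-loop QA generation loop in the spirit of
# KGQAGen: (1) start from a grounded Wikidata subgraph, (2) ask an LLM to draft
# a question, a claimed answer set, and a SPARQL query, (3) keep the instance
# only if executing the SPARQL query reproduces the claimed answers.
# All function names are hypothetical placeholders, not the paper's code.

from SPARQLWrapper import SPARQLWrapper, JSON

WIKIDATA_ENDPOINT = "https://query.wikidata.org/sparql"


def run_sparql(query: str) -> set[str]:
    """Execute a SPARQL query against Wikidata and return the bound values."""
    client = SPARQLWrapper(WIKIDATA_ENDPOINT)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    results = client.query().convert()
    return {
        binding[var]["value"]
        for binding in results["results"]["bindings"]
        for var in binding
    }


def llm_generate_qa(subgraph_triples: list[tuple[str, str, str]]) -> dict:
    """Placeholder for the LLM step: given grounding triples, a real
    implementation would prompt an LLM to return a dict with keys
    'question', 'answers', and 'sparql'."""
    raise NotImplementedError("plug in an LLM client here")


def verify_instance(candidate: dict) -> bool:
    """Symbolic verification: the SPARQL answers must be non-empty and
    match the answers the LLM claimed."""
    gold = run_sparql(candidate["sparql"])
    return len(gold) > 0 and gold == set(candidate["answers"])


def generate_instances(seed_subgraphs, max_rounds: int = 3) -> list[dict]:
    """Retry generation a few times per subgraph; discard anything unverifiable."""
    accepted = []
    for triples in seed_subgraphs:
        for _ in range(max_rounds):
            candidate = llm_generate_qa(triples)
            if verify_instance(candidate):
                accepted.append(candidate)
                break  # verified; move on to the next subgraph
    return accepted
```

The design point this sketch captures is that the knowledge graph, not the LLM, has the final word: a question only enters the benchmark if a symbolic query over Wikidata reproduces the proposed answer set, which is how the framework avoids the annotation errors the audit found in existing datasets.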
Similar Papers
KG-QAGen: A Knowledge-Graph-Based Framework for Systematic Question Generation and Long-Context LLM Evaluation
Computation and Language
Tests how well language models answer questions over long, complex documents.
KERAG: Knowledge-Enhanced Retrieval-Augmented Generation for Advanced Question Answering
Computation and Language
Helps AI answer questions more accurately by retrieving relevant knowledge.
KAQG: A Knowledge-Graph-Enhanced RAG for Difficulty-Controlled Question Generation
Information Retrieval
Generates questions at controlled difficulty levels using knowledge graphs.