KBQA-R1: Reinforcing Large Language Models for Knowledge Base Question Answering

Published: December 10, 2025 | arXiv ID: 2512.10999v1

By: Xin Sun, Zhongqi Chen, Xing Zheng and more

Potential Business Impact:

Helps computers answer questions by checking facts against a knowledge base before responding.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Knowledge Base Question Answering (KBQA) challenges models to bridge the gap between natural language and strict knowledge graph schemas by generating executable logical forms. While Large Language Models (LLMs) have advanced this field, current approaches often struggle with a dichotomy of failure: they either generate hallucinated queries without verifying schema existence or exhibit rigid, template-based reasoning that mimics synthesized traces without true comprehension of the environment. To address these limitations, we present KBQA-R1, a framework that shifts the paradigm from text imitation to interaction optimization via Reinforcement Learning. Treating KBQA as a multi-turn decision process, our model learns to navigate the knowledge base using a defined set of actions, leveraging Group Relative Policy Optimization (GRPO) to refine its strategies based on concrete execution feedback rather than static supervision. Furthermore, we introduce Referenced Rejection Sampling (RRS), a data synthesis method that resolves cold-start challenges by strictly aligning reasoning traces with ground-truth action sequences. Extensive experiments on WebQSP, GrailQA, and GraphQuestions demonstrate that KBQA-R1 achieves state-of-the-art performance, effectively grounding LLM reasoning in verifiable execution.
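To make the GRPO idea in the abstract concrete, the sketch below shows the group-relative advantage computation that GRPO is generally described as using: sample a group of rollouts per question, score each with an execution-based reward, and normalize rewards by the group's mean and standard deviation instead of training a separate value model. This is a minimal illustration, not the authors' implementation; the action names, the Rollout structure, and the F1-style execution reward are assumptions standing in for "concrete execution feedback".

```python
# Minimal sketch of GRPO-style group-relative advantages for multi-turn KBQA
# rollouts. Hypothetical: Rollout fields, action names, and execution_reward.

from dataclasses import dataclass
from typing import List, Set
import statistics


@dataclass
class Rollout:
    actions: List[str]        # multi-turn actions taken against the knowledge base
    final_answer: Set[str]    # entities returned by executing the final logical form
    reward: float = 0.0


def execution_reward(rollout: Rollout, gold_answer: Set[str]) -> float:
    """Hypothetical reward: F1 between the executed answer set and the gold answer set."""
    if not rollout.final_answer and not gold_answer:
        return 1.0
    if not rollout.final_answer or not gold_answer:
        return 0.0
    overlap = len(rollout.final_answer & gold_answer)
    if overlap == 0:
        return 0.0
    precision = overlap / len(rollout.final_answer)
    recall = overlap / len(gold_answer)
    return 2 * precision * recall / (precision + recall)


def group_relative_advantages(rollouts: List[Rollout], gold_answer: Set[str]) -> List[float]:
    """GRPO-style advantages: normalize each rollout's reward by the group's
    mean and standard deviation, so no separate value model is needed."""
    rewards = [execution_reward(r, gold_answer) for r in rollouts]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # avoid division by zero when all rewards tie
    return [(r - mean) / std for r in rewards]


# Example: a group of three sampled rollouts for one question.
group = [
    Rollout(["search_relation", "execute_query"], {"Barack Obama"}),
    Rollout(["search_relation"], set()),
    Rollout(["search_entity", "execute_query"], {"Barack Obama", "Joe Biden"}),
]
print(group_relative_advantages(group, gold_answer={"Barack Obama"}))
```

Rollouts whose executed queries better match the gold answers receive positive advantages and are reinforced, which is how execution feedback, rather than static supervised traces, drives the policy update.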

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Computation and Language