KBQA-R1: Reinforcing Large Language Models for Knowledge Base Question Answering
By: Xin Sun, Zhongqi Chen, Xing Zheng, and more
Potential Business Impact:
Helps computers answer questions by checking facts.
Knowledge Base Question Answering (KBQA) challenges models to bridge the gap between natural language and strict knowledge graph schemas by generating executable logical forms. While Large Language Models (LLMs) have advanced this field, current approaches often struggle with a dichotomy of failure: they either generate hallucinated queries without verifying schema existence, or exhibit rigid, template-based reasoning that mimics synthesized traces without genuine comprehension of the environment. To address these limitations, we present KBQA-R1, a framework that shifts the paradigm from text imitation to interaction optimization via Reinforcement Learning. Treating KBQA as a multi-turn decision process, our model learns to navigate the knowledge base using a defined set of actions, leveraging Group Relative Policy Optimization (GRPO) to refine its strategies based on concrete execution feedback rather than static supervision. Furthermore, we introduce Referenced Rejection Sampling (RRS), a data synthesis method that resolves cold-start challenges by strictly aligning reasoning traces with ground-truth action sequences. Extensive experiments on WebQSP, GrailQA, and GraphQuestions demonstrate that KBQA-R1 achieves state-of-the-art performance, effectively grounding LLM reasoning in verifiable execution.
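The core of GRPO, as referenced in the abstract, is scoring a group of sampled rollouts for the same question and normalizing each reward against the group's statistics instead of a learned value baseline. A minimal sketch of that group-relative advantage computation (function and variable names are illustrative, not from the paper):

```python
# Group-relative advantage as used in GRPO: sample several rollouts per
# question, score each by execution feedback, then normalize rewards
# within the group so advantages are relative to the group mean.
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-8):
    """Normalize each rollout's reward against its group's mean and std."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four rollouts for one question; two queries executed correctly
# (reward 1.0) and two failed (reward 0.0).
advantages = group_relative_advantages([1.0, 1.0, 0.0, 0.0])
```

Rollouts that beat the group average receive positive advantages and are reinforced; below-average rollouts are pushed down, which is what lets execution feedback substitute for static supervision.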
Similar Papers
KGQuest: Template-Driven QA Generation from Knowledge Graphs with LLM-Based Refinement
Computation and Language
Creates smart questions and answers from facts.
Large Language Models Meet Knowledge Graphs for Question Answering: Synthesis and Opportunities
Computation and Language
Helps computers answer hard questions better.
Reinforcement Learning Enhanced Multi-hop Reasoning for Temporal Knowledge Question Answering
Artificial Intelligence
Helps computers answer questions that involve reasoning over time.