UniRel-R1: RL-tuned LLM Reasoning for Knowledge Graph Relational Question Answering
By: Yinxu Tang, Chengsong Huang, Jiaxin Huang and more
Knowledge Graph Question Answering (KGQA) has traditionally focused on entity-centric queries that return a single answer entity. However, real-world queries are often relational, seeking to understand how entities are associated. In this work, we introduce relation-centric KGQA, a complementary setting where the answer is a subgraph capturing the semantic connections among entities rather than an individual entity. The main challenge lies in the abundance of candidate subgraphs, where trivial or overly common connections often obscure the identification of unique and informative answers. To tackle this, we propose UniRel-R1, a unified framework that integrates subgraph selection, multi-stage graph pruning, and an LLM fine-tuned with reinforcement learning. The reward function is designed to encourage compact and specific subgraphs with more informative relations and lower-degree intermediate entities. Extensive experiments show that UniRel-R1 achieves significant gains in connectivity and reward over Vanilla baselines and generalizes effectively to unseen entities and relations.
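The abstract describes a reward that favors compact subgraphs with informative (rare) relations and low-degree intermediate entities. The paper's exact formulation is not given here, so the following is a minimal sketch of one plausible scoring function under those stated goals; the function name, weights, and inverse-log-frequency informativeness term are all illustrative assumptions.

```python
import math

def subgraph_reward(edges, relation_freq, entity_degree, answer_entities,
                    size_weight=0.1, hub_weight=0.5):
    """Hypothetical reward for a candidate answer subgraph.

    edges: list of (head, relation, tail) triples forming the subgraph.
    relation_freq: how often each relation appears in the KG (higher = less informative).
    entity_degree: node degree in the KG (higher = more generic hub).
    answer_entities: the query entities the subgraph must connect.
    """
    if not edges:
        return 0.0
    # Informativeness: rarer relations score higher (inverse log-frequency).
    info = sum(1.0 / (1.0 + math.log1p(relation_freq.get(r, 0)))
               for _, r, _ in edges)
    # Penalize high-degree intermediate entities: hub nodes yield trivial,
    # overly common connections that obscure specific answers.
    intermediates = {e for h, _, t in edges for e in (h, t)} - set(answer_entities)
    hub_penalty = sum(math.log1p(entity_degree.get(e, 0)) for e in intermediates)
    # Penalize size to favor compact subgraphs.
    return info - hub_weight * hub_penalty - size_weight * len(edges)
```

Under this sketch, a direct edge with a rare relation outscores a two-hop path through a popular hub entity, which matches the abstract's stated preference for specific over trivial connections.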