Graph-R1: Incentivizing the Zero-Shot Graph Learning Capability in LLMs via Explicit Reasoning
By: Yicong Wu, Guangyue Lu, Yuan Zuo, and more
Potential Business Impact:
Lets computers solve tricky graph problems (like labeling items in a network) by thinking step-by-step.
Generalizing to unseen graph tasks without task-specific supervision remains challenging. Graph Neural Networks (GNNs) are limited by fixed label spaces, while Large Language Models (LLMs) lack structural inductive biases. Recent advances in Large Reasoning Models (LRMs) provide a zero-shot alternative via explicit, long chain-of-thought reasoning. Inspired by this, we propose a GNN-free approach that reformulates graph tasks (node classification, link prediction, and graph classification) as textual reasoning problems solved by LRMs. We introduce the first datasets with detailed reasoning traces for these tasks and develop Graph-R1, a reinforcement learning framework that leverages task-specific rethink templates to guide reasoning over linearized graphs. Experiments demonstrate that Graph-R1 outperforms state-of-the-art baselines in zero-shot settings, producing interpretable and effective predictions. Our work highlights the promise of explicit reasoning for graph learning and provides new resources for future research.
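To make the core idea concrete, here is a minimal sketch of how a graph might be linearized into text and wrapped in a step-by-step reasoning prompt for node classification. The serialization format, the prompt wording, and the helper names (`linearize_graph`, `build_prompt`) are illustrative assumptions, not the paper's exact templates or code.

```python
# Minimal sketch: turning a node-classification instance into a textual
# reasoning prompt for an LRM. The linearization format and the
# "rethink" instruction below are assumptions, not Graph-R1's exact ones.

from typing import Dict, List, Tuple


def linearize_graph(edges: List[Tuple[int, int]],
                    features: Dict[int, str]) -> str:
    """Serialize nodes and edges into plain text the model can read."""
    node_lines = [f"Node {n}: {desc}" for n, desc in sorted(features.items())]
    edge_lines = [f"Node {u} -- Node {v}" for u, v in edges]
    return ("Nodes:\n" + "\n".join(node_lines)
            + "\nEdges:\n" + "\n".join(edge_lines))


def build_prompt(graph_text: str, target: int, labels: List[str]) -> str:
    """Wrap the linearized graph in a chain-of-thought style instruction."""
    return (
        f"{graph_text}\n\n"
        f"Task: classify Node {target} into one of {labels}.\n"
        "Think step by step about the node's own description and its "
        "neighbors, then rethink your answer before giving a final label."
    )


# Toy citation-style graph: three papers connected by citation links.
edges = [(0, 1), (1, 2)]
features = {0: "paper on GNNs", 1: "paper on LLM reasoning", 2: "paper on RL"}
print(build_prompt(linearize_graph(edges, features),
                   target=1, labels=["ML", "NLP", "Theory"]))
```

The key design point the abstract describes is that no GNN is involved: the graph structure enters the model purely as text, and the reinforcement learning stage then rewards reasoning traces (guided by task-specific rethink templates) that lead to correct predictions.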
Similar Papers
Learn to Think: Bootstrapping LLM Reasoning Capability Through Graph Representation Learning
Machine Learning (CS)
Helps computers solve hard problems by thinking step-by-step.
Less is More: Learning Graph Tasks with Just LLMs
Machine Learning (CS)
Computers learn to solve problems using connected ideas.
Zero-shot Graph Reasoning via Retrieval Augmented Framework with LLMs
Artificial Intelligence
Helps computers answer questions about complex connections.