Zero-shot Graph Reasoning via Retrieval Augmented Framework with LLMs
By: Hanqing Li, Kiran Sheena Jyothi, Henry Liang and more
Potential Business Impact:
Helps computers answer questions about complex connections.
We propose a new, training-free method, Graph Reasoning via Retrieval Augmented Framework (GRRAF), that harnesses retrieval-augmented generation (RAG) alongside the code-generation capabilities of large language models (LLMs) to address a wide range of graph reasoning tasks. In GRRAF, the target graph is stored in a graph database, and the LLM is prompted to generate executable code queries that retrieve the necessary information. This approach circumvents the limitations of existing methods that require extensive finetuning or depend on predefined algorithms, and it incorporates an error feedback loop with a time-out mechanism to ensure both correctness and efficiency. Experimental evaluations on the GraphInstruct dataset reveal that GRRAF achieves 100% accuracy on most graph reasoning tasks, including cycle detection, bipartite graph checks, shortest path computation, and maximum flow, while maintaining consistent token costs regardless of graph size. On subgraph matching, accuracy falls slightly short of perfect but remains very high. Notably, GRRAF scales effectively to large graphs with up to 10,000 nodes.
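The abstract describes a generate-execute-retry loop: the LLM emits a code query, the query is run against the stored graph, and any error (or timeout) is fed back to the LLM for another attempt. A minimal sketch of that loop is below, with a stubbed-out `fake_llm` standing in for the real model call (the function names, the dict-of-lists graph encoding, and the retry budget are all illustrative assumptions, not details from the paper):

```python
from collections import deque
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the LLM: the first attempt contains a bug;
# after receiving the error message as feedback, it returns a correct
# BFS shortest-path query.
BUGGY_QUERY = "result = graph[source][target]"  # TypeError on a dict-of-lists graph
FIXED_QUERY = """
from collections import deque
seen = {source: 0}
q = deque([source])
while q:
    u = q.popleft()
    for v in graph[u]:
        if v not in seen:
            seen[v] = seen[u] + 1
            q.append(v)
result = seen[target]
"""

def fake_llm(task, error=None):
    # A real system would prompt the model with the task and, on retry,
    # the error text; here we just switch on whether feedback exists.
    return FIXED_QUERY if error else BUGGY_QUERY

def run_query(code, graph, source, target, timeout=5.0):
    """Execute a generated code query with a wall-clock time-out."""
    env = {"graph": graph, "source": source, "target": target}
    def _exec():
        exec(code, env)
        return env["result"]
    with ThreadPoolExecutor(max_workers=1) as ex:
        return ex.submit(_exec).result(timeout=timeout)

def answer(graph, source, target, max_rounds=3):
    """Error-feedback loop: regenerate the query until it runs cleanly."""
    error = None
    for _ in range(max_rounds):
        code = fake_llm("shortest path length", error)
        try:
            return run_query(code, graph, source, target)
        except Exception as e:
            error = str(e)  # feed the failure back to the next generation
    raise RuntimeError(f"no correct query within {max_rounds} rounds: {error}")

# Usage: a path graph A-B-C-D; the first (buggy) query fails, the
# retried query computes the shortest-path length via BFS.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(answer(g, "A", "D"))  # → 3
```

In the paper's setting the query would instead target a graph database, but the control flow, generated code plus error/time-out feedback, is the same idea.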
Similar Papers
GRAIL: Learning to Interact with Large Knowledge Graphs for Retrieval Augmented Reasoning
Artificial Intelligence
Helps computers answer questions using connected facts.
GraphRAFT: Retrieval Augmented Fine-Tuning for Knowledge Graphs on Graph Databases
Machine Learning (CS)
Helps AI answer questions using private data safely.
GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models
Machine Learning (CS)
Helps AI answer questions by learning from connected facts.