DAGR: Decomposition Augmented Graph Retrieval with LLMs
By: Valentin Six, Evan Dufraisse, Gaël de Chalendar
Potential Business Impact:
Helps computers answer hard questions by breaking them down.
Large Language Models (LLMs) excel at many Natural Language Processing (NLP) tasks, but struggle with multi-hop reasoning and factual consistency, limiting their effectiveness on knowledge-intensive tasks like complex question answering (QA). Linking Knowledge Graphs (KGs) and LLMs has shown promising results, but LLMs generally lack the ability to reason efficiently over graph-structured information. To address this challenge, we introduce DAGR, a retrieval method that leverages both complex questions and their decomposition into subquestions to extract relevant, linked textual subgraphs. DAGR first breaks down complex queries, retrieves subgraphs guided by a weighted similarity function over both the original and decomposed queries, and creates a question-specific knowledge graph to guide answer generation. The resulting Graph-RAG pipeline is well suited to handling complex multi-hop questions and reasoning effectively over graph-structured data. We evaluate DAGR on standard multi-hop QA benchmarks and show that it achieves performance comparable or superior to competitive existing methods, while using smaller models and fewer LLM calls.
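To make the retrieval step concrete, here is a minimal, runnable sketch of the core idea the abstract describes: scoring KG edges with a weighted similarity over both the original question and its subquestions, then keeping the top-ranked edges as a question-specific subgraph. The specific weighting form (a convex combination with weight `alpha`, taking the max over subquestions), the toy hash-based embedding, and all function names are illustrative assumptions, not the paper's actual implementation.

```python
import zlib
import numpy as np

DIM = 64

def embed(text: str) -> np.ndarray:
    """Toy deterministic embedding (token-hash based).
    A real pipeline would use a sentence encoder instead."""
    vec = np.zeros(DIM)
    for token in text.lower().split():
        rng = np.random.default_rng(zlib.crc32(token.encode()))
        vec += rng.standard_normal(DIM)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Vectors from embed() are unit-normalized, so the dot product is cosine similarity.
    return float(a @ b)

def score_edge(edge_text: str, question: str, subquestions: list[str], alpha: float = 0.5) -> float:
    """Weighted similarity over the original question and its decomposition.
    The form alpha * sim(q, e) + (1 - alpha) * max_i sim(q_i, e) is an
    assumed instantiation of the paper's 'weighted similarity function'."""
    q_sim = cosine(embed(question), embed(edge_text))
    sub_sim = max(cosine(embed(sq), embed(edge_text)) for sq in subquestions)
    return alpha * q_sim + (1 - alpha) * sub_sim

# Toy KG as textualized triples; the top-k edges would form the
# question-specific knowledge graph that guides answer generation.
kg_edges = [
    "Paris capital_of France",
    "France located_in Europe",
    "Louvre located_in Paris",
    "Mona Lisa displayed_in Louvre",
]
question = "In which city is the museum that displays the Mona Lisa?"
subquestions = [
    "Which museum displays the Mona Lisa?",
    "In which city is that museum located?",
]

ranked = sorted(kg_edges, key=lambda e: score_edge(e, question, subquestions), reverse=True)
print(ranked[:2])  # top-2 edges kept as the retrieved subgraph
```

Taking the max over subquestions lets each decomposed hop pull in its own supporting edges, which is one plausible way a decomposition-aware score could surface multi-hop evidence that the original question alone would rank poorly.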
Similar Papers
GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models
Machine Learning (CS)
Helps AI answer questions by learning from connected facts.
Knowledge Graph-extended Retrieval Augmented Generation for Question Answering
Machine Learning (CS)
AI answers questions better by using facts.
Graph-Augmented Reasoning: Evolving Step-by-Step Knowledge Graph Retrieval for LLM Reasoning
Artificial Intelligence
Helps small AI learn math better by finding facts.