From Query to Logic: Ontology-Driven Multi-Hop Reasoning in LLMs
By: Haonan Bian, Yutao Qi, Rui Yang, and more
Potential Business Impact:
Helps computers answer tricky questions by thinking step-by-step.
Large Language Models (LLMs), despite their success in question answering, exhibit limitations in complex multi-hop question answering (MQA) tasks that require non-linear, structured reasoning. This limitation stems from their inability to adequately capture deep conceptual relationships between entities. To overcome this challenge, we present **ORACLE** (**O**ntology-driven **R**easoning **A**nd **C**hain for **L**ogical **E**lucidation), a training-free framework that combines LLMs' generative capabilities with the structural benefits of knowledge graphs. Our approach operates in three stages: (1) dynamic construction of question-specific knowledge ontologies using LLMs, (2) transformation of these ontologies into First-Order Logic reasoning chains, and (3) systematic decomposition of the original query into logically coherent sub-questions. Experimental results on several standard MQA benchmarks show that our framework achieves highly competitive performance, rivaling current state-of-the-art models such as DeepSeek-R1. Detailed analyses further confirm the effectiveness of each component and demonstrate that our method generates more logical and interpretable reasoning chains than existing approaches.
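To make the three stages concrete, here is a minimal Python sketch of how such a training-free pipeline could be wired together. The paper does not publish this interface; the function names, prompt wording, and the `llm` callable below are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the three-stage ORACLE pipeline described above.
# Every name here (build_ontology, ontology_to_fol, decompose_question,
# the prompts, and the `llm` callable) is an illustrative assumption.

from typing import Callable, List

LLM = Callable[[str], str]  # any prompt -> completion function, e.g. an API wrapper


def build_ontology(question: str, llm: LLM) -> str:
    """Stage 1: ask the LLM to construct a question-specific ontology
    (entities, concepts, and the relations that connect them)."""
    prompt = (
        "List the key entities in the question below and the conceptual "
        "relations between them, one `subject -- relation --> object` "
        f"triple per line.\n\nQuestion: {question}"
    )
    return llm(prompt)


def ontology_to_fol(ontology: str, llm: LLM) -> str:
    """Stage 2: translate the ontology triples into a First-Order Logic
    reasoning chain, e.g. `Directed(x, y) ∧ BornIn(x, z) → Answer(z)`."""
    prompt = (
        "Rewrite the following relation triples as a chain of First-Order "
        f"Logic implications leading to the answer predicate.\n\n{ontology}"
    )
    return llm(prompt)


def decompose_question(question: str, fol_chain: str, llm: LLM) -> List[str]:
    """Stage 3: use the FOL chain to split the original query into
    logically ordered sub-questions, one hop each."""
    prompt = (
        "Using this reasoning chain, decompose the question into simple "
        f"sub-questions, one per line.\n\nChain:\n{fol_chain}\n\n"
        f"Question: {question}"
    )
    return [line.strip() for line in llm(prompt).splitlines() if line.strip()]


def oracle_answer(question: str, llm: LLM) -> str:
    """Run the three stages, then answer each sub-question in sequence,
    carrying earlier answers forward as context for later hops."""
    ontology = build_ontology(question, llm)
    fol_chain = ontology_to_fol(ontology, llm)
    context = ""
    for sub_q in decompose_question(question, fol_chain, llm):
        context += f"\nQ: {sub_q}\nA: " + llm(f"{context}\nQ: {sub_q}\nA:")
    return llm(f"{context}\n\nGiven the above, answer: {question}")
```

Any prompt-to-completion function can be passed as `llm`, so the same skeleton works against a local model or a hosted API; the point is only to show how ontology construction, FOL translation, and question decomposition chain together, not to reproduce the authors' prompts.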
Similar Papers
KG-o1: Enhancing Multi-hop Question Answering in Large Language Models via Knowledge Graph Integration
Computation and Language
Helps computers answer questions by connecting facts.
Improving LLM Reasoning via Dependency-Aware Query Decomposition and Logic-Parallel Content Expansion
Artificial Intelligence
Makes AI answer questions much faster and smarter.
Chain-of-Query: Unleashing the Power of LLMs in SQL-Aided Table Understanding via Multi-Agent Collaboration
Computation and Language
Helps computers understand data in tables better.