From Chains to Graphs: Self-Structured Reasoning for General-Domain LLMs
By: Yingjian Chen, Haoran Liu, Yinhong Liu, and more
Potential Business Impact:
Helps computers think better by drawing thought maps.
Large Language Models (LLMs) show strong reasoning ability in open-domain question answering, yet their reasoning processes are typically linear and often logically inconsistent. In contrast, real-world reasoning requires integrating multiple premises and solving subproblems in parallel. Existing methods, such as Chain-of-Thought (CoT), express reasoning in a linear textual form, which may appear coherent but frequently leads to inconsistent conclusions. Recent approaches rely on externally provided graphs and do not explore how LLMs can construct and use their own graph-structured reasoning, particularly in open-domain QA. To fill this gap, we present a novel exploration of graph-structured reasoning by LLMs in general-domain question answering. We propose Self-Graph Reasoning (SGR), a framework that enables LLMs to explicitly represent their reasoning process as a structured graph before producing the final answer. We further construct a graph-structured reasoning dataset that merges multiple candidate reasoning graphs into refined graph structures for model training. Experiments on five QA benchmarks across both general and specialized domains show that SGR consistently improves reasoning consistency and yields a 17.74% gain over the base model. The LLaMA-3.3-70B model fine-tuned with SGR performs comparably to GPT-4o and surpasses Claude-3.5-Haiku, demonstrating the effectiveness of graph-structured reasoning.
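The abstract does not specify how the reasoning graph is represented or parsed, so as a minimal illustrative sketch only: one plausible reading is that the model emits premise and subproblem nodes plus directed support edges before stating its answer. The class names and fields below are assumptions for illustration, not the paper's actual format.

```python
# Illustrative sketch only: the SGR prompt format and graph schema are not
# given in this abstract. All names and fields here are assumptions.
from dataclasses import dataclass, field

@dataclass
class ReasoningNode:
    node_id: str   # e.g., "P1" for a premise, "C1" for a conclusion
    content: str   # natural-language statement produced by the LLM

@dataclass
class ReasoningGraph:
    nodes: dict = field(default_factory=dict)   # node_id -> ReasoningNode
    edges: list = field(default_factory=list)   # (source_id, target_id) pairs

    def add_node(self, node_id: str, content: str) -> None:
        self.nodes[node_id] = ReasoningNode(node_id, content)

    def add_edge(self, source_id: str, target_id: str) -> None:
        # A directed edge records that the source statement supports the target.
        self.edges.append((source_id, target_id))

# Example: two premises integrated in parallel, both feeding one conclusion,
# rather than a single linear chain of steps.
graph = ReasoningGraph()
graph.add_node("P1", "The capital of France is Paris.")
graph.add_node("P2", "Paris lies on the Seine.")
graph.add_node("C1", "The capital of France lies on the Seine.")
graph.add_edge("P1", "C1")
graph.add_edge("P2", "C1")
```

In this sketch, the graph makes the dependency of the conclusion on both premises explicit, which is the kind of structural consistency a linear CoT trace cannot directly express.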
Similar Papers
A Stepwise-Enhanced Reasoning Framework for Large Language Models Based on External Subgraph Generation
Computation and Language
Helps computers think better by using facts.
Grounding LLM Reasoning with Knowledge Graphs
Computation and Language
Makes AI answers trustworthy and easy to check.
Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs Reasoning
Computation and Language
Helps computers understand complex stories better.