A Novel Architecture for Symbolic Reasoning with Decision Trees and LLM Agents
By: Andrew Kiruluta
Potential Business Impact:
Combines rule-based decision trees with LLMs so AI systems can solve problems more accurately and explain their reasoning.
We propose a hybrid architecture that integrates decision tree-based symbolic reasoning with the generative capabilities of large language models (LLMs) within a coordinated multi-agent framework. Unlike prior approaches that loosely couple symbolic and neural modules, our design embeds decision trees and random forests as callable oracles within a unified reasoning system. Tree-based modules enable interpretable rule inference and causal logic, while LLM agents handle abductive reasoning, generalization, and interactive planning. A central orchestrator maintains belief state consistency and mediates communication across agents and external tools, enabling reasoning over both structured and unstructured inputs. The system achieves strong performance on reasoning benchmarks. On ProofWriter, it improves entailment consistency by +7.2% through logic-grounded tree validation. On GSM8k, it achieves +5.3% accuracy gains in multistep mathematical problems via symbolic augmentation. On ARC, it boosts abstraction accuracy by +6.0% through integration of symbolic oracles. Applications in clinical decision support and scientific discovery show how the system encodes domain rules symbolically while leveraging LLMs for contextual inference and hypothesis generation. This architecture offers a robust, interpretable, and extensible solution for general-purpose neuro-symbolic reasoning.
Similar Papers
Current Practices for Building LLM-Powered Reasoning Tools Are Ad Hoc -- and We Can Do Better
Artificial Intelligence
Makes smart computer programs reason better and safer.
Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
Artificial Intelligence
Teaches AI to think better and solve harder problems.
ART: Adaptive Reasoning Trees for Explainable Claim Verification
Artificial Intelligence
Helps AI explain its answers so we can trust it.