Constructing a Neuro-Symbolic Mathematician from First Principles
By: Keqin Xie
Potential Business Impact:
Makes computers think logically like mathematicians.
Large Language Models (LLMs) exhibit persistent logical failures in complex reasoning due to the lack of an internal axiomatic framework. We propose Mathesis, a neuro-symbolic architecture that encodes mathematical states as higher-order hypergraphs and uses a Symbolic Reasoning Kernel (SRK), a differentiable logic engine that maps constraints to a continuous energy landscape. By defining a global energy function E(G), where zero energy implies logical consistency, the SRK yields gradient-based signals to train a Hypergraph Transformer Brain, turning proof search into energy minimization. Multi-step deduction is enabled via Monte Carlo Tree Search and Evolutionary Proof Search, guided by learned value functions and semantic unification.
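To make the core idea concrete, here is a minimal, dependency-free sketch of constraints mapped to a continuous energy landscape, where E = 0 corresponds to logical consistency and gradient descent performs the minimization. This is an illustration of the general energy-based technique the abstract describes, not the paper's actual SRK; the variables, constraints, and product t-norm used here are assumptions for the example.

```python
# Sketch: relaxed truth values in [0, 1]; each logical constraint becomes a
# squared penalty, so the total energy E is zero iff all constraints hold.
# Names and constraints are illustrative, not from the Mathesis paper.

def energy(v):
    a, b, c = v                  # relaxed truth values of three propositions
    e = 0.0
    e += (1.0 - a) ** 2          # axiom: a is true
    e += (1.0 - b) ** 2          # axiom: b is true
    e += (a * b - c) ** 2        # constraint: c <=> (a AND b), product t-norm
    return e

def grad(v, eps=1e-6):
    # Central-difference numerical gradient, to keep the sketch self-contained.
    g = []
    for i in range(len(v)):
        up = list(v); up[i] += eps
        dn = list(v); dn[i] -= eps
        g.append((energy(up) - energy(dn)) / (2 * eps))
    return g

v = [0.5, 0.5, 0.5]              # uncertain initial state
for _ in range(2000):            # plain projected gradient descent
    g = grad(v)
    v = [min(1.0, max(0.0, x - 0.1 * gi)) for x, gi in zip(v, g)]

print([round(x, 3) for x in v], round(energy(v), 6))
```

Driving the energy to zero forces a, b, and hence c toward 1, i.e. the solver "deduces" c from the two axioms by pure minimization; in the paper's framing, the gradients of E(G) would instead flow into a Hypergraph Transformer as a training signal.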
Similar Papers
Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
Artificial Intelligence
Teaches AI to think better and solve harder problems.
Towards a Neurosymbolic Reasoning System Grounded in Schematic Representations
Artificial Intelligence
Helps computers think logically like people.
Current Practices for Building LLM-Powered Reasoning Tools Are Ad Hoc -- and We Can Do Better
Artificial Intelligence
Makes smart computer programs reason better and safer.