Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations
By: Bradley P. Allen, Prateek Chhikara, Thomas Macaulay Ferguson, and more
Potential Business Impact:
Makes smart computers think more logically.
Large language models (LLMs) have demonstrated impressive capabilities in natural language understanding and generation, but they exhibit problems with logical consistency in the output they generate. How can we harness LLMs' broad-coverage parametric knowledge in formal reasoning despite their inconsistency? We present a method for directly integrating an LLM into the interpretation function of the formal semantics for a paraconsistent logic. We provide experimental evidence for the feasibility of the method by evaluating the function using datasets created from several short-form factuality benchmarks. Unlike prior work, our method offers a theoretical framework for neuro-symbolic reasoning that leverages an LLM's knowledge while preserving the underlying logic's soundness and completeness properties.
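To make the core idea concrete, below is a minimal sketch of what "integrating an LLM into the interpretation function" could look like under a four-valued, Belnap-Dunn-style paraconsistent semantics. This is an illustration under assumptions, not the paper's actual construction: the names `ground_atom`, `llm_asserts`, and the lookup-table stub standing in for real LLM calls are all hypothetical. The point is that each atomic sentence is graded by the LLM for evidence both for and against it, so a contradictory answer becomes a "glut" rather than trivializing the logic.

```python
# Sketch: an LLM-grounded interpretation for a four-valued (FDE-style)
# paraconsistent semantics. Helper names are hypothetical illustrations.

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class V:
    pro: bool  # evidence for the statement (LLM asserts it)
    con: bool  # evidence against it (LLM asserts its negation)

TRUE, FALSE = V(True, False), V(False, True)
BOTH, NEITHER = V(True, True), V(False, False)  # glut / gap

def ground_atom(statement: str, llm_asserts: Callable[[str], bool]) -> V:
    """Interpretation of an atomic sentence: query the LLM about the
    statement and its negation independently; conflicting answers yield
    a glut (BOTH), no support either way yields a gap (NEITHER)."""
    return V(llm_asserts(statement),
             llm_asserts(f"It is not the case that {statement}"))

# Connectives over the four values; because gluts are tolerated,
# an inconsistent atom does not make every formula derivable.
def neg(a: V) -> V:
    return V(a.con, a.pro)

def conj(a: V, b: V) -> V:
    return V(a.pro and b.pro, a.con or b.con)

def disj(a: V, b: V) -> V:
    return V(a.pro or b.pro, a.con and b.con)

if __name__ == "__main__":
    # Stub "LLM": a lookup table of assertions, for demonstration only.
    claims = {
        "Paris is the capital of France": True,
        "It is not the case that Paris is the capital of France": False,
        "The Mona Lisa is in Madrid": True,  # an inconsistent belief
        "It is not the case that The Mona Lisa is in Madrid": True,
    }
    llm = lambda s: claims.get(s, False)

    p = ground_atom("Paris is the capital of France", llm)  # -> TRUE
    q = ground_atom("The Mona Lisa is in Madrid", llm)      # -> BOTH (glut)
    print(p, q, conj(p, neg(q)), sep="\n")
```

In this sketch the symbolic reasoner only ever sees four-valued interpretations, so the proof theory of the underlying paraconsistent logic, and hence its soundness and completeness, is untouched by the LLM's inconsistencies; how the paper formally achieves this is its central contribution.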
Similar Papers
Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
Artificial Intelligence
Teaches AI to think better and solve harder problems.
Enhancing Large Language Models through Neuro-Symbolic Integration and Ontological Reasoning
Artificial Intelligence
Makes AI answers more truthful and logical.