A Balanced Neuro-Symbolic Approach for Commonsense Abductive Logic
By: Joseph Cotnareanu, Didier Chetelat, Yingxue Zhang, and more
Potential Business Impact:
Teaches computers to solve hard logic problems by filling in missing commonsense facts.
Although Large Language Models (LLMs) have demonstrated impressive formal reasoning abilities, they often break down when problems require complex proof planning. One promising approach for improving LLM reasoning involves translating problems into formal logic and using a logic solver. Although off-the-shelf logic solvers are in principle substantially more efficient than LLMs at logical reasoning, they assume that all relevant facts are provided in a question and are unable to deal with missing commonsense relations. In this work, we propose a novel method that uses feedback from the logic solver to augment a logic problem, in an iterative manner, with commonsense relations provided by the LLM. This involves a search procedure through potential commonsense assumptions that maximizes the chance of finding useful facts while keeping cost tractable. On a collection of purely logical reasoning datasets, from which some commonsense information has been removed, our method consistently achieves considerable improvements over existing techniques, demonstrating the value of balancing neural and symbolic elements when working in human contexts.
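The abstract's iterative loop can be sketched in Python. Everything below is a minimal illustration, not the authors' implementation: the Horn-rule solver, the fact/rule encoding, and the `propose_assumptions` stub (standing in for an LLM queried with the failed problem) are all assumptions made for the example.

```python
def forward_chain(facts, rules, goal, max_iters=100):
    """Naive forward chaining over Horn rules given as (premises, conclusion).
    Returns True if the goal becomes derivable from the facts."""
    known = set(facts)
    for _ in range(max_iters):
        new = {concl for prems, concl in rules
               if set(prems) <= known and concl not in known}
        if not new:
            break
        known |= new
    return goal in known


def propose_assumptions(facts, rules, goal):
    """Stand-in for an LLM proposing plausible commonsense facts.
    A real system would prompt the LLM with the problem and solver feedback;
    here we return a canned hypothetical commonsense relation."""
    return ["penguin_is_bird"]


def solve_with_feedback(facts, rules, goal, max_rounds=3):
    """Iteratively augment the problem with LLM-suggested commonsense facts
    whenever the symbolic solver fails, as the abstract describes."""
    facts = list(facts)
    for _ in range(max_rounds):
        if forward_chain(facts, rules, goal):
            return True, facts
        # Solver failed: search candidate commonsense assumptions and retry.
        candidates = propose_assumptions(facts, rules, goal)
        new = [c for c in candidates if c not in facts]
        if not new:  # nothing left to try
            return False, facts
        facts.extend(new)
    return forward_chain(facts, rules, goal), facts
```

For example, with the single rule `(["penguin_is_bird", "birds_have_wings"], "penguin_has_wings")` and only the fact `"birds_have_wings"`, the solver alone cannot prove `"penguin_has_wings"`; after the loop adds the missing commonsense fact, the proof goes through.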
Similar Papers
Sound and Complete Neuro-symbolic Reasoning with LLM-Grounded Interpretations
Artificial Intelligence
Makes smart computers think more logically.
A Comparative Study of Neurosymbolic AI Approaches to Interpretable Logical Reasoning
Artificial Intelligence
Makes AI think logically like humans.
Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
Artificial Intelligence
Teaches AI to think better and solve harder problems.