Robustness of Neurosymbolic Reasoners on First-Order Logic Problems
By: Hannah Bansal, Kemal Kurniawan, Lea Frermann
Potential Business Impact:
Makes computers reason better with tricky logic.
Recent trends in NLP aim to improve reasoning capabilities in Large Language Models (LLMs), with a key focus on generalization and robustness to variations in tasks. Counterfactual task variants introduce minimal but semantically meaningful changes to otherwise valid first-order logic (FOL) problem instances, such as altering a single predicate or swapping the roles of constants, to probe whether a reasoning system can maintain logical consistency under perturbation. Previous studies showed that LLMs become brittle under counterfactual variations, suggesting that they often rely on spurious surface patterns to generate responses. In this work, we explore whether a neurosymbolic (NS) approach that integrates an LLM with a symbolic logic solver can mitigate this problem. Experiments across LLMs of varying sizes show that NS methods are more robust but perform worse overall than purely neural methods. We then propose NSCoT, which combines an NS method with Chain-of-Thought (CoT) prompting, and demonstrate that while it improves performance, NSCoT still lags behind standard CoT. Our analysis opens up directions for future work.
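To make the abstract's two key ideas concrete, below is a minimal Python sketch, not the paper's implementation: the LLM's translation of a natural-language problem into facts and Horn rules is mocked as hand-written tuples, and a tiny forward-chaining routine stands in for the symbolic solver. The predicate names, constants, and the entails helper are all illustrative assumptions.

    # Facts are (predicate, constant) pairs; rules mean "if P(x) then Q(x)".
    # In an NS pipeline, an LLM would produce these from the problem text.
    facts = {("dog", "rex"), ("cat", "tom")}
    rules = [("dog", "mammal"), ("cat", "mammal"), ("mammal", "animal")]

    def entails(facts, rules, query):
        """Forward-chain over Horn rules until no new atoms are derived."""
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premise, conclusion in rules:
                for pred, const in list(known):
                    if pred == premise and (conclusion, const) not in known:
                        known.add((conclusion, const))
                        changed = True
        return query in known

    # Original problem: is rex an animal?
    print(entails(facts, rules, ("animal", "rex")))  # True

    # Counterfactual variant: swap the roles of the constants
    # (rex is now the cat, tom the dog); the query is unchanged.
    cf_facts = {("cat", "rex"), ("dog", "tom")}
    print(entails(cf_facts, rules, ("animal", "rex")))  # still True

A purely neural reader answering from surface patterns might keep associating "rex" with "dog" after the swap; the symbolic step derives the answer from the perturbed facts themselves, which is the robustness property the abstract attributes to NS methods.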
Similar Papers
Adaptive LLM-Symbolic Reasoning via Dynamic Logical Solver Composition
Computation and Language
Computers learn to solve problems using logic.
Non-Iterative Symbolic-Aided Chain-of-Thought for Logical Reasoning
Artificial Intelligence
Helps computers think through problems better.
Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks
Computation and Language
Helps computers understand and connect many facts.