Score: 1

Robustness of Neurosymbolic Reasoners on First-Order Logic Problems

Published: September 22, 2025 | arXiv ID: 2509.17377v1

By: Hannah Bansal, Kemal Kurniawan, Lea Frermann

Potential Business Impact:

Could help AI systems reason more reliably on formal logic problems, even when the problems are slightly perturbed.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent trends in NLP aim to improve the reasoning capabilities of Large Language Models (LLMs), with a key focus on generalization and robustness to variations in tasks. Counterfactual task variants introduce minimal but semantically meaningful changes to otherwise valid first-order logic (FOL) problem instances, altering a single predicate or swapping the roles of constants, to probe whether a reasoning system can maintain logical consistency under perturbation. Previous studies showed that LLMs become brittle under counterfactual variations, suggesting that they often rely on spurious surface patterns to generate responses. In this work, we explore whether a neurosymbolic (NS) approach that integrates an LLM with a symbolic logical solver can mitigate this problem. Experiments across LLMs of varying sizes show that NS methods are more robust but perform worse overall than purely neural methods. We then propose NSCoT, which combines an NS method with Chain-of-Thought (CoT) prompting, and demonstrate that while it improves performance, NSCoT still lags behind standard CoT. Our analysis opens research directions for future work.
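To make the neurosymbolic idea concrete, here is a minimal sketch of the two-stage pipeline the abstract describes: an LLM translates a natural-language problem into symbolic form, and a deterministic solver derives the answer. This is an illustration under assumptions, not the paper's implementation: `llm_translate` is a hypothetical stand-in (hard-coded here, where a real system would call an LLM), and the solver is a toy forward-chaining engine over ground Horn clauses rather than a full FOL solver.

```python
# Minimal sketch of a neurosymbolic (NS) pipeline for FOL-style problems.
# Assumptions (not from the paper): llm_translate is a hypothetical stand-in
# for an LLM that parses text into (facts, rules); the solver is a tiny
# forward-chaining engine, not the solver used by the authors.

from typing import List, Set, Tuple

Fact = str                      # e.g., "wumpus(alice)"
Rule = Tuple[List[Fact], Fact]  # (body, head): body facts imply the head

def llm_translate(problem: str) -> Tuple[Set[Fact], List[Rule]]:
    """Hypothetical LLM step: map natural language to facts and rules."""
    # Hard-coded for illustration; a real system would prompt an LLM here.
    return (
        {"wumpus(alice)"},
        [(["wumpus(alice)"], "furry(alice)")],
    )

def forward_chain(facts: Set[Fact], rules: List[Rule]) -> Set[Fact]:
    """Derive all consequences by repeatedly applying satisfied rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def answer(problem: str, query: Fact) -> bool:
    facts, rules = llm_translate(problem)
    return query in forward_chain(facts, rules)

# A counterfactual variant would, e.g., swap the predicate in the premises;
# the symbolic step stays logically consistent as long as the LLM translation
# tracks the perturbation, which is why NS methods are more robust.
print(answer("Alice is a wumpus. Wumpuses are furry.", "furry(alice)"))  # True
```

The design point this sketch captures is the division of labor: the neural component only needs to translate faithfully, while logical consistency under perturbation is guaranteed by the deterministic solver.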

Country of Origin
🇦🇺 Australia

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computation and Language