Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks
By: Sina Bagheri Nezhad, Ameeta Agrawal
Potential Business Impact:
Helps computers understand and connect many facts.
Large language models (LLMs) often struggle to perform multi-target reasoning in long-context scenarios where relevant information is scattered across extensive documents. To address this challenge, we introduce NeuroSymbolic Augmented Reasoning (NSAR), which combines the benefits of neural and symbolic reasoning during inference. NSAR explicitly extracts symbolic facts from text and generates executable Python code to handle complex reasoning steps. Through extensive experiments across seven languages and diverse context lengths, we demonstrate that NSAR significantly outperforms both a vanilla RAG baseline and advanced prompting strategies in accurately identifying and synthesizing multiple pieces of information. Our results highlight the effectiveness of combining explicit symbolic operations with neural inference for robust, interpretable, and scalable reasoning in multilingual settings.
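The fact-extraction-plus-code-generation loop described above can be sketched in miniature. This is a hedged illustration, not the paper's implementation: the fact pattern, the `extract_facts` and `build_reasoning_code` helpers, and the toy context are all hypothetical stand-ins for the LLM-driven extraction and code-generation steps NSAR performs.

```python
import re

def extract_facts(text):
    # Hypothetical symbolic-fact extractor: pull "X is in Y" statements
    # out of a long context as (entity, location) pairs. In NSAR this
    # step would be performed by the LLM, not a regex.
    return re.findall(r"(\w+) is in (\w+)", text)

def build_reasoning_code(facts, targets):
    # Hypothetical code generator: emit executable Python that looks up
    # every target entity in the extracted fact table, mimicking the
    # multi-target reasoning step the abstract describes.
    lines = [f"facts = dict({facts!r})"]
    lines.append(f"answers = [facts[t] for t in {targets!r}]")
    return "\n".join(lines)

# Toy long-context input with two relevant facts scattered among noise.
context = "The key is in Paris. Unrelated sentence. The map is in Tokyo."
facts = extract_facts(context)
code = build_reasoning_code(facts, ["key", "map"])

# Execute the generated reasoning code and collect its answers.
scope = {}
exec(code, scope)
print(scope["answers"])  # ['Paris', 'Tokyo']
```

The design point being illustrated: once facts are symbolic, the multi-step aggregation happens in deterministic code rather than in the model's free-form generation, which is what makes the reasoning inspectable.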
Similar Papers
Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
Artificial Intelligence
Teaches AI to think better and solve harder problems.
Adaptive LLM-Symbolic Reasoning via Dynamic Logical Solver Composition
Computation and Language
Computers learn to solve problems using logic.
Enhancing Large Language Models through Neuro-Symbolic Integration and Ontological Reasoning
Artificial Intelligence
Makes AI answers more truthful and logical.