Enhancing Large Language Models with Neurosymbolic Reasoning for Multilingual Tasks

Published: June 3, 2025 | arXiv ID: 2506.02483v1

By: Sina Bagheri Nezhad, Ameeta Agrawal

Potential Business Impact:

Helps AI systems find and combine facts scattered across long documents in many languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) often struggle to perform multi-target reasoning in long-context scenarios where relevant information is scattered across extensive documents. To address this challenge, we introduce NeuroSymbolic Augmented Reasoning (NSAR), which combines the benefits of neural and symbolic reasoning during inference. NSAR explicitly extracts symbolic facts from text and generates executable Python code to handle complex reasoning steps. Through extensive experiments across seven languages and diverse context lengths, we demonstrate that NSAR significantly outperforms both a vanilla RAG baseline and advanced prompting strategies in accurately identifying and synthesizing multiple pieces of information. Our results highlight the effectiveness of combining explicit symbolic operations with neural inference for robust, interpretable, and scalable reasoning in multilingual settings.
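
The abstract describes a two-stage pipeline: the model first emits symbolic facts extracted from the document, then emits Python code that operates on those facts to produce the answer. Below is a minimal, hypothetical sketch of that idea; the fact triples, the generated snippet, and all variable names are illustrative assumptions, not the authors' implementation.

# Stage 1: symbolic facts the LLM extracted from passages scattered
# across a long document, represented as (subject, relation, object)
# triples.
extracted_facts = [
    ("city_A", "population", 870_000),
    ("city_B", "population", 1_250_000),
    ("city_C", "population", 640_000),
]

# Stage 2: Python code the LLM generated to answer a multi-target query
# such as "Which city has the largest population?". Executing explicit
# code makes the reasoning step inspectable rather than implicit in the
# model's weights.
generated_code = """
populations = {subj: obj for subj, rel, obj in extracted_facts
               if rel == "population"}
answer = max(populations, key=populations.get)
"""

# Run the generated snippet in its own namespace. A production system
# would need to sandbox untrusted model-generated code far more carefully.
namespace = {"extracted_facts": extracted_facts}
exec(generated_code, namespace)
print(namespace["answer"])  # -> city_B

The appeal of this design, per the abstract, is that the aggregation step (here, a max over extracted values) is executed deterministically in code rather than approximated by neural inference alone.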

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Computation and Language