Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
By: Xiao-Wen Yang, Jie-Jing Shao, Lan-Zhe Guo, and more
Potential Business Impact:
Teaches AI to think better and solve harder problems.
Large Language Models (LLMs) have shown promising results across various tasks, yet their reasoning capabilities remain a fundamental challenge. Developing AI systems with strong reasoning capabilities is regarded as a crucial milestone in the pursuit of Artificial General Intelligence (AGI) and has garnered considerable attention from both academia and industry. Various techniques have been explored to enhance the reasoning capabilities of LLMs, with neuro-symbolic approaches emerging as a particularly promising direction. This paper comprehensively reviews recent developments in neuro-symbolic approaches for enhancing LLM reasoning. We first present a formalization of reasoning tasks and give a brief introduction to the neuro-symbolic learning paradigm. Then, we discuss neuro-symbolic methods for improving the reasoning capabilities of LLMs from three perspectives: Symbolic→LLM, LLM→Symbolic, and LLM+Symbolic. Finally, we discuss several key challenges and promising future directions. We have also released a GitHub repository containing papers and resources related to this survey: https://github.com/LAMDASZ-ML/Awesome-LLM-Reasoning-with-NeSy.
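To make the LLM→Symbolic perspective concrete, below is a minimal illustrative sketch (not the paper's own method): the LLM is used only to translate a natural-language question into a formal expression, and a symbolic engine performs the actual, verifiable reasoning. The `call_llm` function is a hypothetical stand-in for any chat-completion API, and SymPy is chosen here purely as an example of a symbolic solver.

```python
# Sketch of the LLM -> Symbolic pattern: the LLM translates language into a
# formal representation; a symbolic engine (here SymPy) does the reasoning.
import sympy as sp


def call_llm(prompt: str) -> str:
    # Hypothetical LLM call. A canned translation is returned so the
    # sketch runs without network access or API keys.
    return "Eq(2*x + 3, 11)"


def solve_with_symbolic_engine(question: str) -> list:
    # 1. Ask the LLM to translate the question into a SymPy equation string.
    formal = call_llm(f"Translate this into a SymPy equation in x: {question}")
    # 2. Hand the formal expression to the symbolic solver, which reasons
    #    soundly by construction, unlike free-form chain-of-thought text.
    x = sp.Symbol("x")
    equation = sp.sympify(formal, locals={"x": x, "Eq": sp.Eq})
    return sp.solve(equation, x)


if __name__ == "__main__":
    print(solve_with_symbolic_engine("Twice a number plus three equals eleven."))
    # -> [4]
```

The other two perspectives invert or combine these roles: Symbolic→LLM injects symbolic knowledge or solver feedback into the LLM (e.g., during training or prompting), while LLM+Symbolic interleaves the two components during inference.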
Similar Papers
A Comparative Study of Neurosymbolic AI Approaches to Interpretable Logical Reasoning
Artificial Intelligence
Makes AI think logically like humans.
Advancing Symbolic Integration in Large Language Models: Beyond Conventional Neurosymbolic AI
Artificial Intelligence
Makes smart computer answers easier to understand.
Intermediate Languages Matter: Formal Languages and LLMs affect Neurosymbolic Reasoning
Artificial Intelligence
Better AI reasoning by picking the right "thinking language."