Intermediate Languages Matter: Formal Languages and LLMs affect Neurosymbolic Reasoning
By: Alexander Beiser, David Penz, Nysret Musliu
Potential Business Impact:
Better AI reasoning by picking the right "thinking language."
Large language models (LLMs) achieve astonishing results on a wide range of tasks, yet their formal reasoning ability still lags behind their general performance. A promising remedy is neurosymbolic LLM reasoning, which uses an LLM to translate a natural-language problem into a formal language and a symbolic solver to derive provably correct results. Still, the factors that contribute to the success of neurosymbolic LLM reasoning remain unclear. This paper demonstrates that one previously overlooked factor is the choice of the formal language. We introduce the intermediate language challenge: selecting a suitable formal language for neurosymbolic reasoning. By comparing four formal languages across three datasets and seven LLMs, we show that the choice of formal language affects both syntactic and semantic reasoning capabilities. We also discuss the varying effects across different LLMs.
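To make the translate-then-solve pipeline concrete, here is a minimal Python sketch. The call_llm helper is hypothetical (plug in any LLM client), and SMT-LIB with the Z3 solver is used purely for illustration, standing in for whichever intermediate language and solver one picks; this is a sketch of the general scheme, not the paper's implementation.

```python
# Minimal sketch of a neurosymbolic reasoning pipeline:
# neural translation (LLM) followed by symbolic solving (Z3).
from z3 import Solver, parse_smt2_string, sat


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; plug in any client here."""
    raise NotImplementedError("connect your LLM client")


def neurosymbolic_solve(problem: str) -> str:
    # Stage 1 (neural): the LLM translates the natural-language problem
    # into a formal intermediate language, SMT-LIB in this sketch.
    smt_text = call_llm(
        "Translate this problem into SMT-LIB assertions:\n" + problem
    )
    # Stage 2 (symbolic): a solver checks the translated formulas, so any
    # answer is logically sound with respect to that translation.
    solver = Solver()
    solver.add(parse_smt2_string(smt_text))  # may raise on malformed LLM output
    return "satisfiable" if solver.check() == sat else "unsatisfiable"
```

Swapping the intermediate language (say, Prolog or answer set programs, each with its own solver) changes only the translation prompt and the solver call, and that choice is exactly the factor the paper shows can shift both syntactic and semantic reasoning accuracy.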
Similar Papers
Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models
Artificial Intelligence
Teaches AI to think better and solve harder problems.
Investigating Language Model Capabilities to Represent and Process Formal Knowledge: A Preliminary Study to Assist Ontology Engineering
Artificial Intelligence
Helps small computers reason better with logic.
A Neurosymbolic Approach to Natural Language Formalization and Verification
Computation and Language
Makes AI follow rules perfectly, like a robot lawyer.