Improving Symbolic Translation of Language Models for Logical Reasoning
By: Ramya Keerthy Thatikonda, Jiuzhou Han, Wray Buntine, and more
The use of formal language for deductive logical reasoning aligns well with language models (LMs): translating natural language (NL) into first-order logic (FOL) and employing an external solver yields a verifiable, and therefore reliable, reasoning system. However, smaller LMs often struggle with this translation task, frequently producing incorrect symbolic outputs due to formatting and translation errors. Existing approaches typically rely on self-iteration to correct these errors, but such methods depend heavily on the capabilities of the underlying model. To address this, we first categorize common errors and fine-tune smaller LMs on data synthesized by large language models, evaluating the results against the defined error categories. We then introduce incremental inference, which divides inference into two stages: predicate generation and FOL translation. This decomposition provides greater control over model behavior and improves generation quality as measured by predicate metrics; it also enables a verification module that targets predicate-arity errors to further improve performance. Our study evaluates three families of models across four logical-reasoning datasets. Together, fine-tuning, incremental inference, and verification reduce error rates, increase predicate coverage, and improve reasoning performance for smaller LMs, moving us closer to reliable and accessible symbolic-reasoning systems.
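The sketch below illustrates the kind of two-stage pipeline the abstract describes: a first pass that produces predicate signatures, a second pass that translates to FOL conditioned on those predicates, and a verifier that flags predicate-arity mismatches. It is a minimal illustration only; the prompts, the `generate` callable standing in for a language model, and the helper functions are assumptions for this example, not the authors' implementation.

```python
import re
from typing import Callable

def parse_signatures(text: str) -> dict[str, int]:
    """Parse lines like 'Shaves(x, y)' into a {predicate: arity} map."""
    sigs = {}
    for name, args in re.findall(r"(\w+)\(([^)]*)\)", text):
        sigs[name] = len([a for a in args.split(",") if a.strip()])
    return sigs

def verify_arity(formulas: list[str], sigs: dict[str, int]) -> list[str]:
    """Report predicate uses whose arity differs from the stage-1 declaration."""
    errors = []
    for f in formulas:
        for name, args in re.findall(r"(\w+)\(([^)]*)\)", f):
            arity = len([a for a in args.split(",") if a.strip()])
            if name in sigs and arity != sigs[name]:
                errors.append(f"{name} used with arity {arity}, declared {sigs[name]}: {f}")
    return errors

def incremental_translate(problem: str, generate: Callable[[str], str]) -> tuple[list[str], list[str]]:
    # Stage 1: predicate generation.
    pred_text = generate(f"List the predicates (with arguments) needed for:\n{problem}")
    sigs = parse_signatures(pred_text)
    # Stage 2: FOL translation, conditioned on the fixed predicate set.
    fol_text = generate(
        f"Using only these predicates {sorted(sigs)}, translate to FOL, one formula per line:\n{problem}"
    )
    formulas = [line.strip() for line in fol_text.splitlines() if line.strip()]
    # Verification: arity mismatches can trigger a repair or re-generation step.
    return formulas, verify_arity(formulas, sigs)

if __name__ == "__main__":
    # Stubbed model output for demonstration; a real run would call an LM.
    def fake_lm(prompt: str) -> str:
        if "List the predicates" in prompt:
            return "Barber(x)\nShaves(x, y)"
        return "∀x (Barber(x) → Shaves(x, x))\n∃x Shaves(x)"  # second line has an arity error

    formulas, errors = incremental_translate("Every barber shaves himself.", fake_lm)
    print(formulas)
    print(errors)  # flags 'Shaves' used with arity 1 instead of 2
```

In this toy run the verifier catches the under-specified `Shaves(x)` use, the sort of predicate-arity error the abstract says the verification module targets before the formulas are handed to an external solver.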
Similar Papers
Investigating Language Model Capabilities to Represent and Process Formal Knowledge: A Preliminary Study to Assist Ontology Engineering
Artificial Intelligence
Helps small computers reason better with logic.
From Implicit to Explicit: Token-Efficient Logical Supervision for Mathematical Reasoning in LLMs
Computation and Language
Teaches computers to think step-by-step for math.
From Hypothesis to Premises: LLM-based Backward Logical Reasoning with Selective Symbolic Translation
Computation and Language
Helps AI think backward to solve problems better.