Score: 1

Search-Based Correction of Reasoning Chains for Language Models

Published: May 17, 2025 | arXiv ID: 2505.11824v1

By: Minsu Kim, Jean-Pierre Falet, Oliver E. Richardson, and more

Potential Business Impact:

Detects and corrects errors in an AI model's intermediate reasoning steps, improving the reliability of its final answers.

Business Areas:
Semantic Search, Internet Services

Chain-of-Thought (CoT) reasoning has advanced the capabilities and transparency of language models (LMs); however, reasoning chains can contain inaccurate statements that reduce performance and trustworthiness. To address this, we introduce a new self-correction framework that augments each reasoning step in a CoT with a latent variable indicating its veracity, enabling modeling of all possible truth assignments rather than assuming correctness throughout. To explore this expanded space efficiently, we propose Search Corrector, a discrete search algorithm over Boolean-valued veracity assignments. It performs otherwise intractable inference over the posterior distribution of veracity assignments by leveraging the LM's joint likelihood over veracity and the final answer as a proxy reward. This inference-time correction method also facilitates supervised fine-tuning of an Amortized Corrector by providing pseudo-labels for veracity. The Amortized Corrector generalizes self-correction, enabling accurate zero-shot veracity inference in novel contexts. Empirically, Search Corrector reliably identifies errors in logical (ProntoQA) and mathematical (GSM8K) reasoning benchmarks, and the Amortized Corrector achieves comparable zero-shot accuracy while improving final-answer accuracy by up to 25%.
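
To make the search idea concrete, here is a minimal sketch of one plausible reading of the abstract: a greedy bit-flip local search over Boolean veracity assignments, scored by a proxy reward. The `joint_loglik` callable and the single-flip move set are assumptions for illustration, not the paper's exact Search Corrector algorithm.

```python
from typing import Callable, List, Tuple

def search_corrector(
    steps: List[str],
    answer: str,
    joint_loglik: Callable[[List[str], Tuple[bool, ...], str], float],
    max_rounds: int = 10,
) -> Tuple[bool, ...]:
    """Greedy local search over Boolean veracity assignments.

    joint_loglik is a hypothetical stand-in for the LM's joint
    likelihood over a veracity assignment and the final answer,
    used here as a proxy reward.
    """
    # Start from the all-true assignment: every step presumed correct.
    current = tuple(True for _ in steps)
    best_score = joint_loglik(steps, current, answer)

    for _ in range(max_rounds):
        improved = False
        # Score each single-bit flip of the current assignment and
        # greedily keep any neighbor with a higher proxy reward.
        for i in range(len(steps)):
            candidate = current[:i] + (not current[i],) + current[i + 1:]
            score = joint_loglik(steps, candidate, answer)
            if score > best_score:
                current, best_score, improved = candidate, score, True
        if not improved:
            break  # Local optimum reached in the 2**n assignment space.
    return current
```

Each round scores only the n single-flip neighbors of the current assignment, so the search stays tractable even though the full space of truth assignments has 2^n elements; the resulting assignments can then serve as pseudo-labels for fine-tuning an amortized corrector.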

Country of Origin
🇰🇷 Republic of Korea, 🇨🇦 Canada

Page Count
25 pages

Category
Computer Science: Machine Learning (CS)