Reverse Thinking Enhances Missing Information Detection in Large Language Models
By: Yuxin Liu, Chaojie Gu, Yihang Zhang, and more
Potential Business Impact:
Helps computers find missing puzzle pieces.
Large Language Models (LLMs) have demonstrated remarkable capabilities in various reasoning tasks, yet they often struggle with problems involving missing information, exhibiting issues such as incomplete responses, factual errors, and hallucinations. While forward reasoning approaches like Chain-of-Thought (CoT) and Tree-of-Thoughts (ToT) have shown success in structured problem-solving, they frequently fail to systematically identify and recover omitted information. In this paper, we explore the potential of reverse thinking methodologies to enhance LLMs' performance on missing information detection tasks. Drawing inspiration from recent work on backward reasoning, we propose a novel framework that guides LLMs through reverse thinking to identify necessary conditions and pinpoint missing elements. Our approach transforms the challenging task of missing information identification into a more manageable backward reasoning problem, significantly improving model accuracy. Experimental results demonstrate that our reverse thinking approach achieves substantial performance gains compared to traditional forward reasoning methods, providing a promising direction for enhancing LLMs' logical completeness and reasoning robustness.
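The abstract describes the framework only at a high level, but its two core steps (a backward pass that enumerates the conditions a question needs, followed by a verification pass that checks each condition against the given facts) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the authors' implementation: call_llm, find_missing_information, and both prompt templates are hypothetical placeholders, not names from the paper.

# Minimal sketch of a reverse-thinking pipeline for missing-information
# detection. ASSUMPTIONS: call_llm is a hypothetical stand-in for any
# chat-completion API, and the prompt templates are illustrative; none
# of this is taken from the paper itself.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def find_missing_information(question: str, given_facts: list[str]) -> list[str]:
    # Backward pass: reason from the goal to the conditions it requires,
    # asking the model what MUST hold for the question to be answerable.
    conditions_prompt = (
        "Work backward from the goal. To answer the question below, list "
        "every condition or piece of information that is necessary, one "
        "per line.\n\nQuestion: " + question
    )
    necessary_conditions = [
        line.strip("- ").strip()
        for line in call_llm(conditions_prompt).splitlines()
        if line.strip()
    ]

    # Verification pass: check each necessary condition against the facts
    # actually provided; unsupported conditions are the missing information.
    missing = []
    facts_block = "\n".join(given_facts)
    for condition in necessary_conditions:
        check_prompt = (
            "Facts:\n" + facts_block + "\n\nIs the following condition "
            "supported by the facts above? Answer YES or NO only.\n"
            "Condition: " + condition
        )
        if call_llm(check_prompt).strip().upper().startswith("NO"):
            missing.append(condition)
    return missing

Splitting enumeration from verification keeps each prompt small and makes unsupported conditions surface as an explicit list, which matches the abstract's framing of missing-information identification as a backward reasoning problem.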
Similar Papers
Reason from Future: Reverse Thought Chain Enhances LLM Reasoning
Artificial Intelligence
Helps computers solve hard problems by thinking backward.
When Thinking Fails: The Pitfalls of Reasoning for Instruction-Following in LLMs
Computation and Language
Makes AI follow instructions better by fixing reasoning.
From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models
Computation and Language
Helps AI "think step-by-step" to solve harder problems.