Reasoning Beyond Language: A Comprehensive Survey on Latent Chain-of-Thought Reasoning
By: Xinghao Chen, Anhao Zhao, Heming Xia, and more
Potential Business Impact:
Lets computers think faster without words.
Large Language Models (LLMs) have achieved impressive performance on complex reasoning tasks with Chain-of-Thought (CoT) prompting. However, conventional CoT relies on reasoning steps explicitly verbalized in natural language, which introduces inefficiency and limits its applicability to abstract reasoning. To address this, there has been growing research interest in latent CoT reasoning, where inference occurs within latent spaces rather than through discrete tokens. By decoupling reasoning from language, latent reasoning promises richer cognitive representations and more flexible, faster inference. Researchers have explored various directions in this field, including training methodologies, structural innovations, and internal reasoning mechanisms. This paper presents a comprehensive overview and analysis of this reasoning paradigm. We begin by proposing a unified taxonomy from four perspectives: token-wise strategies, internal mechanisms, analysis, and applications. We then provide in-depth discussions and comparative analyses of representative methods, highlighting their design patterns, strengths, and open challenges. We aim to provide a structured foundation for advancing this emerging direction in LLM reasoning. The relevant papers will be regularly updated at https://github.com/EIT-NLP/Awesome-Latent-CoT.
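To make the paradigm concrete, here is a minimal sketch of one representative token-wise strategy: continuous latent thoughts in the style of Coconut-like methods, not the survey's own algorithm. Instead of decoding intermediate reasoning tokens, the model's last hidden state is appended back as the next input embedding for a few silent steps before the visible answer is decoded. The model name, prompt, and step count below are illustrative assumptions, and an off-the-shelf model is not trained to exploit these latent positions; methods in this family fine-tune the model to use them.

```python
# Hedged sketch of latent ("continuous thought") CoT decoding.
# Assumption: a causal LM whose hidden size matches its input embedding
# size (true for GPT-2), so a hidden state can stand in for an embedding.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative choice, not prescribed by the survey
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

prompt = "Q: A train travels 60 km in 30 minutes. What is its speed? A:"
input_ids = tok(prompt, return_tensors="pt").input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)

num_latent_steps = 4  # silent reasoning steps performed in latent space
with torch.no_grad():
    for _ in range(num_latent_steps):
        out = model(inputs_embeds=inputs_embeds, output_hidden_states=True)
        # The final layer's hidden state at the last position serves as a
        # "latent thought"; it is fed back as the next input embedding
        # instead of being verbalized as a token.
        thought = out.hidden_states[-1][:, -1:, :]
        inputs_embeds = torch.cat([inputs_embeds, thought], dim=1)

    # After the latent steps, decode visible answer tokens as usual.
    out = model(inputs_embeds=inputs_embeds)
    next_id = out.logits[:, -1, :].argmax(dim=-1)
    print(tok.decode(next_id))
```

The design point the sketch illustrates is the decoupling the abstract describes: the reasoning trace lives in hidden-state space and never surfaces as language, which is what enables the claimed efficiency and expressivity gains.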
Similar Papers
A Survey on Latent Reasoning
Computation and Language
Lets computers think faster without words.
Towards Reasoning Era: A Survey of Long Chain-of-Thought for Reasoning Large Language Models
Artificial Intelligence
Makes computers think deeper to solve hard problems.
From Perception to Reasoning: Deep Thinking Empowers Multimodal Large Language Models
Computation and Language
Helps AI "think step-by-step" to solve harder problems.