DART: Distilling Autoregressive Reasoning to Silent Thought
By: Nan Jiang, Ziming Wu, De-Chuan Zhan, and more
Potential Business Impact:
Makes AI think faster without losing answers.
Chain-of-Thought (CoT) reasoning has significantly advanced Large Language Models (LLMs) in solving complex tasks. However, its autoregressive paradigm incurs substantial computational overhead, hindering deployment in latency-sensitive applications. To address this, we propose DART (Distilling Autoregressive Reasoning to Silent Thought), a self-distillation framework that enables LLMs to replace autoregressive CoT with non-autoregressive Silent Thought (ST). Specifically, DART introduces two training pathways: a CoT pathway for traditional reasoning and an ST pathway for generating answers directly from a few ST tokens. The ST pathway uses a lightweight Reasoning Evolvement Module (REM) to align its hidden states with those of the CoT pathway, enabling the ST tokens to evolve into informative embeddings. During inference, only the ST pathway is activated, leveraging the evolved ST tokens to deliver the answer directly. Extensive experiments show that DART achieves significant performance gains over existing non-autoregressive baselines without extra inference latency, making it a practical alternative for efficient reasoning.
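To make the two-pathway idea concrete, here is a minimal numpy sketch of the hidden-state alignment the abstract describes. It assumes, purely for illustration, that REM is a single linear projection and that alignment is a mean-squared error between pooled hidden states; the variable names and these modeling choices are ours, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: hidden width, number of ST tokens, number of CoT tokens.
d_model, n_st, n_cot = 16, 4, 32

# Stand-ins for transformer hidden states from each training pathway.
h_st = rng.standard_normal((n_st, d_model))    # ST pathway: a few silent-thought tokens
h_cot = rng.standard_normal((n_cot, d_model))  # CoT pathway: full autoregressive trace

# REM sketched as one linear layer that "evolves" the ST hidden states.
W_rem = rng.standard_normal((d_model, d_model)) * 0.1
h_st_evolved = h_st @ W_rem

# Alignment objective: mean-pool each pathway's states, then MSE between them.
target = h_cot.mean(axis=0)          # teacher signal from the CoT pathway
pred = h_st_evolved.mean(axis=0)     # student signal from the ST pathway
align_loss = float(np.mean((pred - target) ** 2))
print(f"alignment loss: {align_loss:.4f}")
```

Training would minimize this loss jointly with the answer losses of both pathways; at inference only the ST branch (here, `h_st` through `W_rem`) would run, which is where the latency saving comes from.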
Similar Papers
Beyond Templates: Dynamic Adaptation of Reasoning Demonstrations via Feasibility-Aware Exploration
Computation and Language
Teaches small computers to think like big ones.
Discovery and Reinforcement of Tool-Integrated Reasoning Chains via Rollout Trees
Computation and Language
Teaches computers to use tools for harder problems.
DiffCoT: Diffusion-styled Chain-of-Thought Reasoning in LLMs
Computation and Language
Fixes math mistakes in AI step-by-step thinking.