Do Latent Tokens Think? A Causal and Adversarial Analysis of Chain-of-Continuous-Thought
By: Yuyi Zhang, Boyu Tang, Tianjie Ju, and more
Potential Business Impact:
Shows that AI "reasoning" tokens can hide shortcut use, making models look smarter than they are.
Latent tokens are gaining attention for enhancing reasoning in large language models (LLMs), yet their internal mechanisms remain unclear. This paper examines the problem from a reliability perspective, uncovering fundamental weaknesses: latent tokens function as uninterpretable placeholders rather than encoding faithful reasoning. While resistant to perturbation, they promote shortcut usage over genuine reasoning. We focus on Chain-of-Continuous-Thought (COCONUT), which claims better efficiency and stability than explicit Chain-of-Thought (CoT) while maintaining performance. We investigate this through two complementary approaches. First, steering experiments perturb specific token subsets, namely COCONUT latent tokens and explicit CoT tokens. Unlike CoT tokens, COCONUT tokens show minimal sensitivity to steering and lack reasoning-critical information. Second, shortcut experiments evaluate models under biased and out-of-distribution settings. Results on MMLU and HotpotQA demonstrate that COCONUT consistently exploits dataset artifacts, inflating benchmark performance without true reasoning. These findings reposition COCONUT as a pseudo-reasoning mechanism: it generates plausible traces that conceal shortcut dependence rather than faithfully representing reasoning processes.
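The abstract does not spell out how the steering perturbation is applied. As a rough illustration only, the following PyTorch sketch shows one way such an intervention could be implemented: a forward hook that adds a steering vector to the hidden states at a chosen subset of token positions (e.g., the COCONUT latent positions or the explicit CoT positions). The helper name `add_steering_hook`, the hook-based mechanism, and the choice of steering vector are assumptions for illustration, not the authors' code.

```python
# Hedged sketch: perturb hidden states at selected token positions via a forward hook.
# Assumptions (not from the paper): a PyTorch transformer layer we can hook, known
# position indices for the latent (COCONUT) tokens vs. explicit CoT tokens, and an
# arbitrary steering vector (e.g., a random direction or a difference-of-means vector).
import torch

def add_steering_hook(layer, positions, vector, scale=1.0):
    """Hypothetical helper: register a forward hook on `layer` that adds
    `scale * vector` to the hidden states at the given token positions."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden.clone()
        # Perturb only the chosen token subset; `vector` broadcasts over batch and positions.
        hidden[:, positions, :] = hidden[:, positions, :] + scale * vector
        # Forward hooks may return a replacement output; preserve any extra tuple elements.
        return ((hidden,) + tuple(output[1:])) if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)

# Usage idea: run the model with and without the hook, once steering the latent-token
# positions and once the CoT-token positions, and compare answer log-probabilities.
# The paper's finding suggests steering latent positions barely changes the output,
# whereas steering CoT positions does.
```

Comparing output changes under these two interventions is one way to operationalize the sensitivity contrast the abstract describes; the paper's actual procedure may differ.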
Similar Papers
Continuous Chain of Thought Enables Parallel Exploration and Reasoning
Machine Learning (CS)
Lets computers think in more ways at once.
Latent Chain-of-Thought for Visual Reasoning
Artificial Intelligence
Makes AI think step-by-step better for new problems.
Token Assorted: Mixing Latent and Text Tokens for Improved Language Model Reasoning
Computation and Language
Makes AI think faster by shortening its steps.