Do Latent Tokens Think? A Causal and Adversarial Analysis of Chain-of-Continuous-Thought

Published: December 25, 2025 | arXiv ID: 2512.21711v1

By: Yuyi Zhang, Boyu Tang, Tianjie Ju, and more

Potential Business Impact:

Shows that a popular AI "reasoning" shortcut only looks smart: it exploits quirks in test data instead of actually reasoning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Latent tokens are gaining attention for enhancing reasoning in large language models (LLMs), yet their internal mechanisms remain unclear. This paper examines them from a reliability perspective and uncovers fundamental weaknesses: latent tokens function as uninterpretable placeholders rather than encoding faithful reasoning, and although they are robust to perturbation, they promote shortcut usage over genuine reasoning. We focus on Chain-of-Continuous-Thought (COCONUT), which claims better efficiency and stability than explicit Chain-of-Thought (CoT) while maintaining performance, and investigate it through two complementary approaches. First, steering experiments perturb specific token subsets, namely COCONUT latent tokens and explicit CoT tokens; unlike CoT tokens, COCONUT tokens show minimal sensitivity to steering and lack reasoning-critical information. Second, shortcut experiments evaluate models under biased and out-of-distribution settings; results on MMLU and HotpotQA demonstrate that COCONUT consistently exploits dataset artifacts, inflating benchmark performance without true reasoning. These findings reposition COCONUT as a pseudo-reasoning mechanism: it generates plausible traces that conceal shortcut dependence rather than faithfully representing the reasoning process.
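
The steering experiments the abstract describes amount to adding a perturbation vector to hidden states at chosen token positions and checking whether the model's answer changes. Below is a minimal sketch of that idea, assuming a HuggingFace-style causal LM; the model name, layer index, token span, and random steering direction are illustrative placeholders, not the paper's actual setup.

```python
# A minimal sketch of activation steering on a chosen token span, assuming a
# HuggingFace-style causal LM. Model name, layer index, token positions, and
# the random direction are illustrative placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

layer = model.transformer.h[6]   # transformer block to perturb (illustrative)
latent_positions = range(5, 9)   # span standing in for latent/CoT tokens
alpha = 4.0                      # steering strength

# Random unit vector as a stand-in for a learned steering direction.
direction = torch.randn(model.config.hidden_size)
direction = direction / direction.norm()

def steer_hook(module, inputs, output):
    # The block's first output element holds the hidden states.
    hidden = output[0]
    for pos in latent_positions:
        if pos < hidden.size(1):
            hidden[:, pos, :] += alpha * direction
    return (hidden,) + output[1:]

handle = layer.register_forward_hook(steer_hook)
prompt = "Question: If x + 3 = 7, what is x? Thought: ... Answer:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    steered = model(**inputs).logits
handle.remove()
with torch.no_grad():
    baseline = model(**inputs).logits
# A small steered-vs-baseline difference at the answer position mirrors the
# paper's finding that latent tokens are insensitive to steering.
print((steered[:, -1] - baseline[:, -1]).norm().item())
```

Running the same comparison with the perturbed span placed over latent tokens versus explicit CoT tokens is the crux of the steering contrast the abstract draws.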
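
The shortcut experiments reduce to an accuracy comparison between examples where a superficial dataset artifact predicts the answer and examples where it is broken. A minimal sketch of that gap metric, with a hypothetical `predict` function and data-field names:

```python
# Hypothetical shortcut-gap metric: Accuracy(artifact-aligned) minus
# Accuracy(artifact-broken). A large positive gap suggests the model relies
# on the artifact rather than on genuine reasoning.
from typing import Callable, Dict, List

def shortcut_gap(predict: Callable[[str], str],
                 aligned: List[Dict], broken: List[Dict]) -> float:
    def acc(split: List[Dict]) -> float:
        correct = sum(predict(ex["question"]) == ex["answer"] for ex in split)
        return correct / len(split)
    return acc(aligned) - acc(broken)
```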

Country of Origin
🇨🇳 China

Page Count
13 pages

Category
Computer Science:
Computation and Language