Reasoning Beyond Chain-of-Thought: A Latent Computational Mode in Large Language Models

Published: January 12, 2026 | arXiv ID: 2601.08058v1

By: Zhenghao He, Guangzhi Xiong, Bohan Liu, and more

Potential Business Impact:

Improves language-model reasoning accuracy without extra prompt instructions, yielding shorter and more efficient outputs.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Chain-of-Thought (CoT) prompting has improved the reasoning performance of large language models (LLMs), but it remains unclear why it works and whether it is the only mechanism that triggers reasoning in these models. In this work, we study this question by directly analyzing and intervening on the internal representations of LLMs with Sparse Autoencoders (SAEs), identifying a small set of latent features that are causally associated with reasoning behavior. Across multiple model families and reasoning benchmarks, we find that steering a single reasoning-related latent feature can substantially improve accuracy without explicit CoT prompting. For large models, latent steering achieves performance comparable to standard CoT prompting while producing more efficient outputs. We further observe that this reasoning-oriented internal state is triggered early in generation and can override prompt-level instructions that discourage explicit reasoning. Overall, our results suggest that multi-step reasoning in LLMs is supported by latent internal activations that can be triggered externally; CoT prompting is one effective, but not the only, way of activating this mechanism rather than its necessary cause.
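The steering the abstract describes amounts to adding a scaled SAE feature direction to the model's residual stream during generation. Below is a minimal sketch of this idea, assuming a Llama-style Hugging Face model; the model name, layer index, feature index, steering coefficient, and the randomly initialized stand-in SAE decoder are all illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch of SAE-based latent steering via a forward hook.
# All constants below are hypothetical; a real run would load a trained
# SAE for the target layer and use the feature identified as
# reasoning-related.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder model
LAYER = 12        # hypothetical layer the SAE was trained on
FEATURE = 4242    # hypothetical index of a reasoning-related latent
ALPHA = 8.0       # hypothetical steering coefficient

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

# An SAE decoder maps latent features back to the residual stream; its
# rows are feature directions, shape (n_features, d_model). Random
# weights here are a stand-in for a trained decoder.
W_dec = torch.randn(32768, model.config.hidden_size, dtype=torch.bfloat16)
direction = W_dec[FEATURE] / W_dec[FEATURE].norm()  # unit feature direction

def steer(module, inputs, output):
    # Decoder layers typically return a tuple whose first element is the
    # hidden states; add the scaled feature direction at every position.
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * direction.to(hidden.device, hidden.dtype)
    if isinstance(output, tuple):
        return (hidden,) + output[1:]
    return hidden

handle = model.model.layers[LAYER].register_forward_hook(steer)
try:
    # No CoT instruction in the prompt; the steering alone nudges the
    # model toward its reasoning-oriented internal state.
    prompt = "Answer directly: what is 17 * 24?"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=64)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # detach the hook so later calls run unsteered
```

In practice the coefficient trades off steering strength against output degradation, and the intervention layer and feature index come from the SAE analysis stage rather than being chosen by hand.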

Page Count
15 pages

Category
Computer Science:
Computation and Language