Score: 1

Do Sparse Autoencoders Identify Reasoning Features in Language Models?

Published: January 9, 2026 | arXiv ID: 2601.05679v1

By: George Ma, Zhongyuan Liang, Irene Y. Chen, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Finds that AI "reasoning" features mostly track reasoning-related words, not actual reasoning.

Business Areas:
Semantic Search, Internet Services

We investigate whether sparse autoencoders (SAEs) identify genuine reasoning features in large language models (LLMs). Starting from features selected using standard contrastive activation methods, we introduce a falsification-oriented framework that combines causal token injection experiments and LLM-guided falsification to test whether feature activation reflects reasoning processes or superficial linguistic correlates. Across 20 configurations spanning multiple model families, layers, and reasoning datasets, we find that identified reasoning features are highly sensitive to token-level interventions. Injecting a small number of feature-associated tokens into non-reasoning text is sufficient to elicit strong activation for 59% to 94% of features, indicating reliance on lexical artifacts. For the remaining features that are not explained by simple token triggers, LLM-guided falsification consistently produces non-reasoning inputs that activate the feature and reasoning inputs that do not, with no analyzed feature satisfying our criteria for genuine reasoning behavior. Steering these features yields minimal changes or slight degradations in benchmark performance. Together, these results suggest that SAE features identified by contrastive approaches primarily capture linguistic correlates of reasoning rather than the underlying reasoning computations themselves.
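To make the token-injection sensitivity test described above concrete, here is a minimal sketch (not the authors' code): take a non-reasoning sentence, splice in a few tokens associated with a candidate "reasoning" feature, and compare the feature's SAE activation before and after. The model name, layer index, feature index, feature-associated token list, and the randomly initialized SAE encoder are all illustrative assumptions, not values from the paper.

```python
# Sketch of a token-injection sensitivity check for an SAE feature.
# All specifics (model, layer, feature id, tokens, SAE weights) are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"                       # placeholder model
LAYER = 6                                 # placeholder residual-stream layer
FEATURE_ID = 1234                         # placeholder SAE feature index
INJECTED_TOKENS = " therefore thus"       # hypothetical feature-associated tokens

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

hidden_dim = model.config.hidden_size
sae_dim = 8 * hidden_dim
# Stand-in for a trained SAE encoder: f(x) = ReLU(W_enc x + b_enc).
W_enc = torch.randn(sae_dim, hidden_dim) / hidden_dim**0.5
b_enc = torch.zeros(sae_dim)

def feature_activation(text: str) -> float:
    """Max activation of FEATURE_ID over all token positions of `text`."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    resid = out.hidden_states[LAYER][0]            # (seq_len, hidden_dim)
    feats = torch.relu(resid @ W_enc.T + b_enc)    # (seq_len, sae_dim)
    return feats[:, FEATURE_ID].max().item()

base = "The cafe on Main Street opens at eight and serves pastries."
injected = base + INJECTED_TOKENS                  # naive lexical injection

a_base, a_inj = feature_activation(base), feature_activation(injected)
print(f"base activation: {a_base:.3f}, after injection: {a_inj:.3f}")
# If a handful of injected tokens drives the feature from near-zero to strong
# activation on clearly non-reasoning text, the feature behaves like a lexical
# detector rather than a marker of reasoning computation.
```

In the paper's framing, features that fail this check (the 59% to 94% reported across configurations) are attributed to lexical artifacts; the remaining features are then probed with LLM-guided falsification.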

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
38 pages

Category
Computer Science:
Machine Learning (CS)