Mechanistic Interpretability of Large-Scale Counting in LLMs through a System-2 Strategy
By: Hosein Hasani, Mohammadali Banayeeanzade, Ali Nafisi, and more
Potential Business Impact:
Helps language models count large numbers of items accurately.
Large language models (LLMs), despite strong performance on complex mathematical problems, exhibit systematic limitations in counting tasks. This issue arises from architectural limits of transformers, where counting is performed across layers, leading to degraded precision for larger counting problems due to depth constraints. To address this limitation, we propose a simple test-time strategy inspired by System-2 cognitive processes that decomposes large counting tasks into smaller, independent sub-problems that the model can reliably solve. We evaluate this approach using observational and causal mediation analyses to understand the underlying mechanism of this System-2-like strategy. Our mechanistic analysis identifies key components: latent counts are computed and stored in the final item representations of each part, transferred to intermediate steps via dedicated attention heads, and aggregated in the final stage to produce the total count. Experimental results demonstrate that this strategy enables LLMs to surpass architectural limitations and achieve high accuracy on large-scale counting tasks. This work provides mechanistic insight into System-2 counting in LLMs and presents a generalizable approach for improving and understanding their reasoning behavior.
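To make the decomposition strategy concrete, below is a minimal sketch of the test-time idea described in the abstract: split a large counting task into small sub-problems the model can solve reliably, then aggregate the per-part counts into the total. The function names, prompt-free interface, chunk size, and the toy counting callback are illustrative assumptions for this sketch, not the paper's actual prompts or evaluation code.

```python
# Minimal sketch of the System-2-style decomposition strategy.
# All names and parameters here are assumptions; the paper's exact
# prompting and aggregation setup may differ.

from typing import Callable, List


def count_with_decomposition(
    items: List[str],
    target: str,
    llm_count: Callable[[List[str], str], int],
    chunk_size: int = 10,
) -> int:
    """Count occurrences of `target` in `items` by splitting the list into
    small sub-problems and summing the per-chunk counts at the end."""
    total = 0
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        # Each sub-problem is kept small so it stays within the range
        # the model can count reliably given its depth constraints.
        total += llm_count(chunk, target)
    return total


def toy_llm_count(chunk: List[str], target: str) -> int:
    # Stand-in for an actual LLM call; counts exactly for demonstration.
    return sum(1 for item in chunk if item == target)


if __name__ == "__main__":
    data = ["apple", "pear", "apple", "plum"] * 30  # 120 items, 60 apples
    print(count_with_decomposition(data, "apple", toy_llm_count, chunk_size=8))  # 60
```

In the paper's analysis, the per-part counts correspond to latent counts stored in each part's final item representation, and the final summation corresponds to the aggregation stage carried out via dedicated attention heads; the sketch above only mirrors that structure at the prompt level.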
Similar Papers
Sequential Enumeration in Large Language Models
Artificial Intelligence
Computers still struggle to count items in lists.
Reasoning on a Spectrum: Aligning LLMs to System 1 and System 2 Thinking
Computation and Language
Teaches computers to think fast or slow.