How does Transformer Learn Implicit Reasoning?

Published: May 29, 2025 | arXiv ID: 2505.23653v1

By: Jiaran Ye, Zijun Yao, Zhidian Huang, and more

Potential Business Impact:

Shows how language models learn to chain multiple reasoning steps internally without spelling them out, pointing toward more transparent AI systems.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent work suggests that large language models (LLMs) can perform multi-hop reasoning implicitly -- producing correct answers without explicitly verbalizing intermediate steps -- but the underlying mechanisms remain poorly understood. In this paper, we study how such implicit reasoning emerges by training transformers from scratch in a controlled symbolic environment. Our analysis reveals a three-stage developmental trajectory: early memorization, followed by in-distribution generalization, and eventually cross-distribution generalization. We find that training with atomic triples is not necessary but accelerates learning, and that second-hop generalization relies on query-level exposure to specific compositional structures. To interpret these behaviors, we introduce two diagnostic tools: cross-query semantic patching, which identifies semantically reusable intermediate representations, and a cosine-based representational lens, which reveals that successful reasoning correlates with the emergence of cosine-based clustering in hidden space. This clustering phenomenon in turn provides a coherent explanation for the behavioral dynamics observed across training, linking representational structure to reasoning capability. These findings provide new insights into the interpretability of implicit multi-hop reasoning in LLMs, helping to clarify how complex reasoning processes unfold internally and offering pathways to enhance the transparency of such models.
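To make the setup concrete, here is a minimal sketch of the kind of controlled symbolic environment the abstract describes: atomic triples (head, relation, tail) plus composed two-hop queries whose answer requires chaining two facts through a bridge entity that is never verbalized. The entity/relation names, text format, and sampling scheme below are illustrative assumptions, not the paper's actual code.

```python
import random

random.seed(0)
entities = [f"e{i}" for i in range(100)]
relations = [f"r{i}" for i in range(10)]

# Atomic facts: each (head, relation) pair present in the KB maps to one tail.
facts = {}
for h in entities:
    for r in random.sample(relations, 3):
        facts[(h, r)] = random.choice(entities)

def two_hop(h, r1, r2):
    """Answer a composed query by chaining two atomic lookups via a bridge."""
    bridge = facts.get((h, r1))
    return facts.get((bridge, r2)) if bridge is not None else None

# Atomic queries verbalize single facts; composed queries require inferring
# the bridge entity, which appears in neither the query nor the answer.
atomic = [f"{h} {r} {t}" for (h, r), t in facts.items()]
composed = [
    f"{h} {r1} {r2} {t}"
    for (h, r1) in facts
    for r2 in relations
    if (t := two_hop(h, r1, r2)) is not None
]

h, r1, r2, t = composed[0].split()
assert two_hop(h, r1, r2) == t
print(len(atomic), "atomic;", len(composed), "composed, e.g.", composed[0])
```

Holding out some compositions from training is what makes the in-distribution vs. cross-distribution generalization stages observable.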
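Similarly, a cosine-based representational lens could look like the sketch below, assuming access to per-query hidden states at some layer: it scores how strongly queries sharing the same latent bridge entity cluster together under cosine similarity. The function names, grouping criterion, and toy data are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def clustering_score(hidden: np.ndarray, bridge_ids: np.ndarray) -> float:
    """Mean within-bridge cosine similarity minus mean across-bridge similarity.

    hidden:     (n_queries, d) hidden states at a chosen layer/position
    bridge_ids: (n_queries,) id of each query's latent bridge entity
    A large positive score indicates bridge-aligned clustering.
    """
    within, across = [], []
    n = len(hidden)
    for i in range(n):
        for j in range(i + 1, n):
            s = cosine_sim(hidden[i], hidden[j])
            (within if bridge_ids[i] == bridge_ids[j] else across).append(s)
    return float(np.mean(within) - np.mean(across))

# Toy check: representations built around two "bridge" directions cluster.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 16))
hidden = np.vstack([base[i % 2] + 0.1 * rng.normal(size=16) for i in range(20)])
print(round(clustering_score(hidden, np.arange(20) % 2), 3))
```

Tracking such a score across training checkpoints is one way the reported link between representational clustering and the onset of successful implicit reasoning could be measured.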

Country of Origin
🇨🇳 China

Repos / Data Links

Page Count
22 pages

Category
Computer Science:
Machine Learning (CS)