Score: 1

Implicit Reasoning in Transformers is Reasoning through Shortcuts

Published: March 10, 2025 | arXiv ID: 2503.07604v3

By: Tianhe Lin, Jian Xie, Siyu Yuan, and more

Potential Business Impact:

Teaches computers to solve problems by copying patterns.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Test-time compute is emerging as a new paradigm for enhancing language models' complex multi-step reasoning capabilities, as demonstrated by the success of OpenAI's o1 and o3 as well as DeepSeek's R1. Compared to explicit reasoning at test time, implicit reasoning is more inference-efficient, requiring fewer generated tokens. So why does advanced reasoning capability fail to emerge in the implicit reasoning style? In this work, we train GPT-2 from scratch on a curated multi-step mathematical reasoning dataset and conduct analytical experiments to investigate how language models perform implicit reasoning in multi-step tasks. Our findings reveal: 1) Language models can perform step-by-step reasoning via implicit reasoning and achieve high accuracy on both in-domain and out-of-domain tests, but this capability emerges only when models are trained on fixed-pattern data. 2) Conversely, implicit reasoning abilities that emerge from training on unfixed-pattern data tend to overfit a specific pattern and fail to generalize further; notably, this limitation is also observed in state-of-the-art large language models. These findings suggest that language models acquire implicit reasoning through shortcut learning, which yields strong performance on tasks with similar patterns but does not generalize.
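To make the fixed-pattern versus unfixed-pattern distinction concrete, here is a minimal sketch of how a multi-step arithmetic dataset of this kind could be generated. The function name, operation set, step count, and output format are illustrative assumptions for this sketch, not the paper's actual dataset construction.

```python
import random

def make_multistep_example(num_steps: int, fixed_pattern: bool, rng: random.Random) -> str:
    """Build one multi-step arithmetic example in an implicit-reasoning format.

    fixed_pattern=True  -> every example chains operations in the same order
                           (here: alternating + and -),
    fixed_pattern=False -> the operation at each step is sampled independently,
                           so the pattern varies across examples.
    """
    ops = ["+", "-"]
    value = rng.randint(0, 9)
    tokens = [str(value)]
    for step in range(num_steps):
        op = ops[step % 2] if fixed_pattern else rng.choice(ops)
        operand = rng.randint(0, 9)
        value = value + operand if op == "+" else value - operand
        tokens += [op, str(operand)]
    # Implicit-reasoning target: only the final answer follows "=",
    # with no intermediate chain-of-thought steps written out.
    return "".join(tokens) + "=" + str(value)

rng = random.Random(0)
fixed = [make_multistep_example(3, True, rng) for _ in range(3)]    # same op order every time
unfixed = [make_multistep_example(3, False, rng) for _ in range(3)] # op order varies per example
print(fixed)
print(unfixed)
```

Under this setup, a model trained only on the fixed-pattern stream sees the same operation order in every example, so high accuracy there can reflect a learned shortcut for that pattern rather than general step-by-step computation, which is the distinction the paper's experiments probe.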

Country of Origin
🇨🇳 China

Page Count
18 pages

Category
Computer Science:
Computation and Language