Language models can learn implicit multi-hop reasoning, but only if they have lots of training data

Published: May 23, 2025 | arXiv ID: 2505.17923v1

By: Yuekun Yao, Yupei Du, Dawei Zhu, and more

Potential Business Impact:

Language models could answer multi-step questions in a single forward pass, without spelling out intermediate reasoning, which cuts inference cost; however, the training data required grows exponentially with the number of reasoning steps.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Implicit reasoning is the ability of a language model to solve multi-hop reasoning tasks in a single forward pass, without chain of thought. We investigate this capability using GPT2-style language models trained from scratch on controlled $k$-hop reasoning datasets ($k = 2, 3, 4$). We show that while such models can indeed learn implicit $k$-hop reasoning, the required training data grows exponentially in $k$, and the required number of transformer layers grows linearly in $k$. We offer a theoretical explanation for why this depth growth is necessary. We further find that the data requirement can be mitigated, but not eliminated, through curriculum learning.
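
To make the setup concrete, here is a minimal sketch of how a controlled $k$-hop dataset of this kind could be generated. The paper's exact data format is not reproduced in this summary, so the entity vocabulary, relation names (`r0`, `r1`, ...), and serialization below are illustrative assumptions:

```python
import random

# A minimal sketch of a synthetic k-hop composition task. All names here
# (entities e0..eN, relations r0..r{k-1}, the serialization format) are
# illustrative assumptions, not the paper's actual dataset format.

def make_khop_example(k, num_entities=50, num_distractors=5, rng=random):
    """Build one k-hop example: the facts on the gold reasoning path,
    plus distractor facts, and a query whose answer requires composing
    all k hops in order."""
    entities = [f"e{i}" for i in range(num_entities)]
    # One random function (one "hop") per step, defined on all entities.
    hops = [{e: rng.choice(entities) for e in entities} for _ in range(k)]

    start = rng.choice(entities)
    facts, x = [], start
    for i, hop in enumerate(hops):
        facts.append(f"r{i}({x})={hop[x]}")  # fact on the gold path
        x = hop[x]
    answer = x

    # Distractor facts drawn from the same hop functions, so path facts
    # cannot be spotted by relation name alone.
    for _ in range(num_distractors):
        i = rng.randrange(k)
        e = rng.choice(entities)
        facts.append(f"r{i}({e})={hops[i][e]}")
    rng.shuffle(facts)

    # The query nests all k relations; an implicit-reasoning model must
    # emit `answer` directly, with no intermediate steps written out.
    query = start
    for i in range(k):
        query = f"r{i}({query})"
    return " ; ".join(facts), query + "=?", answer

if __name__ == "__main__":
    rng = random.Random(0)
    context, query, answer = make_khop_example(k=3, rng=rng)
    print(context)
    print(query, "->", answer)
```

One intuition consistent with the reported results: each added hop multiplies the space of possible compositions the model must internalize, so covering $k$-hop behavior in training demands rapidly more examples, and resolving the full chain in one forward pass requires correspondingly more layers.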

Country of Origin
🇩🇪 🇳🇱 Germany, Netherlands

Page Count
19 pages

Category
Computer Science:
Computation and Language