Extrapolation by Association: Length Generalization Transfer in Transformers

Published: June 10, 2025 | arXiv ID: 2506.09251v2

By: Ziyang Cai, Nayoung Lee, Avi Schwarzschild, and more

Potential Business Impact:

Helps computers handle longer versions of a task by training on similar, longer tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Transformer language models have demonstrated impressive generalization capabilities in natural language domains, yet we lack a fine-grained understanding of how such generalization arises. In this paper, we investigate length generalization (the ability to extrapolate from shorter to longer inputs) through the lens of task association. We find that length generalization can be transferred across related tasks. That is, training a model with a longer and related auxiliary task can lead it to generalize to unseen and longer inputs from some other target task. We demonstrate this length generalization transfer across diverse algorithmic tasks, including arithmetic operations, string transformations, and maze navigation. Our results show that transformer models can inherit generalization capabilities from similar tasks when trained jointly. Moreover, we observe similar transfer effects in pretrained language models, suggesting that pretraining equips models with reusable computational scaffolding that facilitates extrapolation in downstream settings. Finally, we provide initial mechanistic evidence that length generalization transfer correlates with the re-use of the same attention heads between the tasks. Together, our findings deepen our understanding of how transformers generalize to out-of-distribution inputs and highlight the compositional reuse of inductive structure across tasks.
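
The abstract describes joint training in which an auxiliary task is seen at longer input lengths than the target task, and length generalization on the target task improves as a result. The sketch below is a minimal, hypothetical illustration of how such a mixed training stream might be built; it is not the paper's code, and the specific tasks (copy as auxiliary, reverse as target), length caps, and mixing fraction are assumptions chosen only to make the setup concrete.

```python
import random

# Hypothetical sketch of the joint-training data regime described in the
# abstract: the target task ("reverse") appears only at short lengths, while
# a related auxiliary task ("copy") appears at much longer lengths. The idea
# under test is whether extrapolation learned on the auxiliary task transfers
# to the target task at lengths never seen during training.

VOCAB = list("abcdefghij")

def make_example(task: str, length: int) -> str:
    """Format one example as a 'task: input = answer' string."""
    s = "".join(random.choices(VOCAB, k=length))
    if task == "copy":        # auxiliary task: identity mapping
        answer = s
    elif task == "reverse":   # target task: reverse the string
        answer = s[::-1]
    else:
        raise ValueError(f"unknown task: {task}")
    return f"{task}: {s} = {answer}"

def joint_training_stream(n_examples: int,
                          target_max_len: int = 10,
                          aux_max_len: int = 40,
                          aux_fraction: float = 0.5):
    """Mix short target-task examples with longer auxiliary-task examples."""
    for _ in range(n_examples):
        if random.random() < aux_fraction:
            yield make_example("copy", random.randint(1, aux_max_len))
        else:
            yield make_example("reverse", random.randint(1, target_max_len))

if __name__ == "__main__":
    random.seed(0)
    for example in joint_training_stream(5):
        print(example)
    # Evaluation would then probe "reverse" at lengths beyond target_max_len
    # (e.g. 20-40 characters) to check whether length generalization
    # transferred from the auxiliary task.
```

In this framing, the control condition would train on the target task alone (or with a short auxiliary task) and compare extrapolation accuracy at held-out lengths; the paper reports analogous comparisons on arithmetic, string transformation, and maze navigation tasks.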

Country of Origin
🇺🇸 United States

Page Count
23 pages

Category
Computer Science:
Computation and Language