Do Depth-Grown Models Overcome the Curse of Depth? An In-Depth Analysis
By: Ferdinand Kapl, Emmanouil Angelis, Tobias Höppe, and more
Gradually growing the depth of Transformers during training can not only reduce training cost but also improve reasoning performance, as shown by MIDAS (Saunshi et al., 2024). Thus far, however, a mechanistic understanding of these gains has been missing. In this work, we establish a connection to recent work showing that layers in the second half of non-grown, pre-layernorm Transformers contribute much less to the final output distribution than those in the first half, a phenomenon known as the Curse of Depth (Sun et al., 2025; Csordás et al., 2025). Using depth-wise analyses, we demonstrate that growth via gradual middle stacking yields more effective utilization of model depth, alters the structure of the residual stream, and facilitates the formation of permutable computational blocks. In addition, we propose a lightweight modification of MIDAS that yields further improvements on downstream reasoning benchmarks. Overall, this work highlights how gradually growing model depth can lead to the formation of distinct computational circuits and overcome the limited depth utilization seen in standard non-grown models.
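To make the growth mechanism concrete, below is a minimal PyTorch sketch of gradual middle stacking: a shallow pre-layernorm stack is trained first, and at each growth stage copies of the middle block are spliced into the center of the stack before training continues on the deeper model. This is an illustrative assumption of how such a schedule can be wired up, not the MIDAS implementation; names such as `GrowableTransformer` and `grow_middle` are hypothetical, and attention is omitted for brevity.

```python
# Minimal sketch of gradual middle stacking (MIDAS-style depth growth).
# NOT the authors' implementation; the class names, growth schedule, and
# simplified blocks below are illustrative assumptions.
import copy

import torch
import torch.nn as nn


class Block(nn.Module):
    """Toy pre-layernorm Transformer block (attention omitted for brevity)."""

    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-LN residual update: x + f(LN(x))
        return x + self.mlp(self.norm(x))


class GrowableTransformer(nn.Module):
    """Stack of blocks whose depth can be grown by copying the middle block."""

    def __init__(self, d_model: int, initial_depth: int):
        super().__init__()
        self.blocks = nn.ModuleList(Block(d_model) for _ in range(initial_depth))

    def grow_middle(self, num_new_blocks: int) -> None:
        """Splice copies of the current middle block into the center of the stack."""
        mid = len(self.blocks) // 2
        new_blocks = [copy.deepcopy(self.blocks[mid]) for _ in range(num_new_blocks)]
        blocks = list(self.blocks)
        blocks[mid:mid] = new_blocks  # insert duplicates at the middle
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


if __name__ == "__main__":
    model = GrowableTransformer(d_model=64, initial_depth=6)
    x = torch.randn(2, 16, 64)
    y_shallow = model(x)          # train the shallow model first
    model.grow_middle(num_new_blocks=6)  # growth stage: deepen, then keep training
    y_deep = model(x)
    print(len(model.blocks), y_shallow.shape, y_deep.shape)
```

In this sketch the new blocks start as exact copies of an already-trained middle block, so the grown model's function changes only gradually, which is the intuition behind continuing training after each growth stage rather than training the full depth from scratch.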