ResNets Are Deeper Than You Think

Published: June 17, 2025 | arXiv ID: 2506.14386v1

By: Christian H. X. Ali Mehmeti-Göpel, Michael Wand

Potential Business Impact:

Suggests that residual (skip) connections help neural networks generalize better, not just train more easily, which could inform the design of more accurate machine learning models.

Business Areas:
Artificial Intelligence / Machine Learning

Residual connections remain ubiquitous in modern neural network architectures nearly a decade after their introduction. Their widespread adoption is often credited to their dramatically improved trainability: residual networks train faster, more stably, and achieve higher accuracy than their feedforward counterparts. While numerous techniques, ranging from improved initialization to advanced learning rate schedules, have been proposed to close the performance gap between residual and feedforward networks, this gap has persisted. In this work, we propose an alternative explanation: residual networks do not merely reparameterize feedforward networks, but instead inhabit a different function space. We design a controlled post-training comparison to isolate generalization performance from trainability; we find that variable-depth architectures, similar to ResNets, consistently outperform fixed-depth networks, even when optimization is unlikely to make a difference. These results suggest that residual connections confer performance advantages beyond optimization, pointing instead to a deeper inductive bias aligned with the structure of natural data.
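To make the contrast in the abstract concrete, the minimal PyTorch sketch below shows the difference between a plain fixed-depth feedforward block and a residual block, whose skip connection lets the block behave as a near-identity and thus gives the network a form of variable effective depth. The module names, layer sizes, and framework choice are illustrative assumptions, not the authors' code or experimental setup.

```python
# Minimal illustration (not from the paper): feedforward vs. residual block.
import torch
import torch.nn as nn


class FeedforwardBlock(nn.Module):
    """Fixed-depth block: every input must pass through all layers."""

    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)


class ResidualBlock(nn.Module):
    """Residual block: the skip connection adds an identity path, so the
    learned transformation can be close to zero and the block close to
    the identity, effectively varying the network's depth."""

    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # identity path + learned residual
        return torch.relu(x + self.body(x))


if __name__ == "__main__":
    x = torch.randn(8, 64)  # batch of 8 feature vectors of width 64
    print(FeedforwardBlock(64)(x).shape)  # torch.Size([8, 64])
    print(ResidualBlock(64)(x).shape)     # torch.Size([8, 64])
```

The paper's claim, roughly, is that stacking blocks of the second kind does more than ease optimization: it places the model in a different, variable-depth function space that appears better matched to natural data.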

Country of Origin
🇩🇪 Germany

Page Count
24 pages

Category
Computer Science:
Machine Learning (CS)