Information-theoretic reduction of deep neural networks to linear models in the overparametrized proportional regime

Published: May 6, 2025 | arXiv ID: 2505.03577v1

By: Francesco Camilli, Daria Tieplova, Eleonora Bergamin, and more

Potential Business Impact:

Shows that very wide, fully trained deep networks can behave like simple linear models, clarifying when added model complexity brings no predictive benefit.

Business Areas:
A/B Testing; Data and Analytics

We rigorously analyse fully-trained neural networks of arbitrary depth in the Bayesian optimal setting in the so-called proportional scaling regime, where the number of training samples and the widths of the input and all inner layers diverge proportionally. We prove an information-theoretic equivalence between the Bayesian deep neural network model trained on data generated by a teacher with matching architecture, and a simpler model of optimal inference in a generalized linear model. This equivalence enables us to compute the optimal generalization error for deep neural networks in this regime. We thus prove the "deep Gaussian equivalence principle" conjectured in Cui et al. (2023) (arXiv:2302.00375). Our result highlights that, to escape this "trivialisation" of deep neural networks (in the sense of a reduction to a linear model) occurring in the strongly overparametrized proportional regime, one must consider models trained on much more data.
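For readers who want a concrete picture of the regime the abstract describes, the following LaTeX sketch spells out the proportional scaling and the claimed equivalence at the level of mutual information. The symbols n, d, k_ell, alpha, and gamma_ell are notation chosen here for illustration and may differ from the paper's own conventions.

% Proportional scaling regime (illustrative notation, not necessarily the paper's):
% n = number of training samples, d = input dimension,
% k_ell = width of hidden layer ell in a depth-L teacher/student pair.
\[
  n,\, d,\, k_1, \dots, k_L \to \infty,
  \qquad \frac{n}{d} \to \alpha > 0,
  \qquad \frac{k_\ell}{d} \to \gamma_\ell > 0 .
\]
% Schematic form of the information-theoretic equivalence: the per-dimension
% mutual information between the teacher's weights and the data coincides, in
% the limit, with that of a suitable generalized linear model (GLM), so the
% Bayes-optimal generalization errors of the two models match as well.
\[
  \lim_{d \to \infty} \frac{1}{d}\, I_{\mathrm{deep}}\big(\text{teacher weights};\, \text{data}\big)
  \;=\;
  \lim_{d \to \infty} \frac{1}{d}\, I_{\mathrm{GLM}}\big(\text{teacher weights};\, \text{data}\big).
\]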

Country of Origin
🇮🇹 Italy

Page Count
41 pages

Category
Mathematics:
Statistics Theory