Finite-Width Neural Tangent Kernels from Feynman Diagrams
By: Max Guillen, Philipp Misof, Jan E. Gerken
Potential Business Impact:
Explains how realistic neural networks learn, making their training easier to predict.
Neural tangent kernels (NTKs) are a powerful tool for analyzing deep, non-linear neural networks. In the infinite-width limit, NTKs can easily be computed for most common architectures, yielding full analytic control over the training dynamics. However, at infinite width, important properties of training such as NTK evolution and feature learning are absent. These finite-width effects can be recovered by computing corrections to the Gaussian statistics of the infinite-width limit. We introduce Feynman diagrams for computing finite-width corrections to NTK statistics. They dramatically simplify the necessary algebraic manipulations and enable the computation of layer-wise recursive relations for arbitrary statistics involving preactivations, the NTK, and certain higher-derivative tensors (dNTK and ddNTK) required to predict the training dynamics at leading order. We demonstrate the feasibility of our framework by extending stability results for deep networks from preactivations to NTKs and by proving the absence of finite-width corrections for scale-invariant nonlinearities such as ReLU on the diagonal of the NTK Gram matrix. We validate our results with numerical experiments.
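To make the central object concrete, the sketch below computes the finite-width (empirical) NTK of a small ReLU MLP in JAX via Jacobian contraction, Theta(x, x') = sum over parameters of df(x)/dtheta * df(x')/dtheta. This is only an illustration of the quantity whose finite-width statistics the paper studies; the architecture, width, and data are assumptions chosen for the example, not taken from the paper.

```python
# Minimal sketch (not from the paper): the finite-width "empirical" NTK of a
# small ReLU MLP, Theta(x, x') = sum_theta df(x)/dtheta * df(x')/dtheta.
# Widths, depth and data below are illustrative assumptions.
import jax
import jax.numpy as jnp

def init_mlp(key, widths):
    # NTK-style 1/sqrt(fan_in) initialisation
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        key, wkey = jax.random.split(key)
        params.append((jax.random.normal(wkey, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params

def mlp(params, x):
    # Scalar-output network; ReLU is a scale-invariant nonlinearity as in the abstract
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return (x @ W + b).squeeze(-1)

def empirical_ntk(params, x1, x2):
    # Gram matrix of parameter gradients: Theta_ij = <grad f(x1_i), grad f(x2_j)>
    j1 = jax.jacobian(lambda p: mlp(p, x1))(params)
    j2 = jax.jacobian(lambda p: mlp(p, x2))(params)
    flatten = lambda j: jnp.concatenate(
        [leaf.reshape(leaf.shape[0], -1) for leaf in jax.tree_util.tree_leaves(j)],
        axis=1)
    return flatten(j1) @ flatten(j2).T

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (4, 8))          # 4 inputs of dimension 8
params = init_mlp(key, [8, 64, 64, 1])      # width 64: corrections scale like 1/width
print(empirical_ntk(params, x, x))          # 4 x 4 NTK Gram matrix at finite width
```

In the infinite-width limit this Gram matrix becomes deterministic and fixed during training; the paper's diagrammatic recursions characterize its fluctuations and evolution at finite width.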
Similar Papers
Mathematical Foundations of Neural Tangents and Infinite-Width Networks
Machine Learning (CS)
Makes AI learn better and faster.
The Spectral Dimension of NTKs is Constant: A Theory of Implicit Regularization, Finite-Width Stability, and Scalable Estimation
Machine Learning (CS)
Helps computers learn better with fewer mistakes.