Finite-Width Neural Tangent Kernels from Feynman Diagrams

Published: August 15, 2025 | arXiv ID: 2508.11522v2

By: Max Guillen, Philipp Misof, Jan E. Gerken

Potential Business Impact:

Gives researchers practical formulas for predicting how real, finite-size neural networks behave during training, which can guide architecture and training choices before running expensive experiments.

Neural tangent kernels (NTKs) are a powerful tool for analyzing deep, non-linear neural networks. In the infinite-width limit, NTKs can easily be computed for most common architectures, yielding full analytic control over the training dynamics. However, at infinite width, important properties of training such as NTK evolution and feature learning are absent. These finite-width effects can be included by computing corrections to the Gaussian statistics of the infinite-width limit. We introduce Feynman diagrams for computing finite-width corrections to NTK statistics. These dramatically simplify the necessary algebraic manipulations and enable the computation of layer-wise recursive relations for arbitrary statistics involving preactivations, NTKs, and certain higher-derivative tensors (dNTK and ddNTK) required to predict the training dynamics at leading order. We demonstrate the feasibility of our framework by extending stability results for deep networks from preactivations to NTKs and by proving the absence of finite-width corrections on the diagonal of the NTK Gram matrix for scale-invariant nonlinearities such as ReLU. We validate our results with numerical experiments.
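For readers unfamiliar with the object being corrected, the sketch below (not from the paper; all function names and widths are illustrative assumptions) computes the empirical, finite-width NTK Theta(x, x') = sum over parameters of (df(x)/dtheta)(df(x')/dtheta) for a small ReLU MLP in JAX and evaluates it at several widths. At infinite width this quantity becomes deterministic and constant during training; the finite-width statistics that the paper's Feynman-diagram corrections describe are the deviations from that limit.

```python
# Minimal sketch of the empirical (finite-width) NTK; not the paper's method.
import jax
import jax.numpy as jnp

def init_mlp(key, widths):
    """Random MLP weights (no biases) for layers widths[0] -> ... -> widths[-1]."""
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        params.append(jax.random.normal(sub, (d_out, d_in)))
    return params

def mlp(params, x):
    """Forward pass with ReLU hidden layers and NTK-style 1/sqrt(fan_in) scaling."""
    h = x
    for W in params[:-1]:
        h = jax.nn.relu(W @ h / jnp.sqrt(W.shape[1]))
    W_out = params[-1]
    return (W_out @ h / jnp.sqrt(W_out.shape[1]))[0]  # scalar output

def empirical_ntk(params, x1, x2):
    """One NTK entry: inner product of parameter gradients at x1 and x2."""
    g1 = jax.grad(mlp)(params, x1)
    g2 = jax.grad(mlp)(params, x2)
    return sum(jnp.vdot(a, b)
               for a, b in zip(jax.tree_util.tree_leaves(g1),
                               jax.tree_util.tree_leaves(g2)))

key = jax.random.PRNGKey(0)
x = jnp.ones(8)
for width in (64, 512, 4096):
    params = init_mlp(key, [8, width, width, 1])
    # At initialization the value concentrates toward the infinite-width NTK
    # as width grows; finite-width corrections capture the residual statistics.
    print(width, empirical_ntk(params, x, x))
```

Each NTK entry is a sum over all parameters, so at finite width it is a random variable over initializations and changes during training; the paper's recursions characterize its leading 1/width statistics layer by layer.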

Country of Origin
🇸🇪 Sweden

Page Count
40 pages

Category
Computer Science: Machine Learning (CS)