Score: 1

Training NTK to Generalize with KARE

Published: May 16, 2025 | arXiv ID: 2505.11347v2

By: Johannes Schwab, Bryan Kelly, Semyon Malamud, and more

Potential Business Impact:

Trains the neural tangent kernel directly to generalize better, producing models that can match or beat standard end-to-end deep network training.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The performance of the data-dependent neural tangent kernel (NTK; Jacot et al. (2018)) associated with a trained deep neural network (DNN) often matches or exceeds that of the full network. This implies that DNN training via gradient descent implicitly performs kernel learning by optimizing the NTK. In this paper, we propose instead to optimize the NTK explicitly. Rather than minimizing empirical risk, we train the NTK to minimize its generalization error using the recently developed Kernel Alignment Risk Estimator (KARE; Jacot et al. (2020)). Our simulations and real-data experiments show that NTKs trained with KARE consistently match or significantly outperform the original DNN and the DNN-induced NTK (the after-kernel). These results suggest that explicitly trained kernels can outperform traditional end-to-end DNN optimization in certain settings, challenging the conventional dominance of DNNs. We argue that explicit training of the NTK is a form of over-parametrized feature learning.
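To make the idea concrete, below is a minimal sketch (not the authors' code) of the recipe the abstract describes: compute the empirical NTK of a small network and run gradient descent on the network parameters to minimize the KARE objective of Jacot et al. (2020) instead of the empirical risk. The architecture, learning rate, and ridge parameter `lam` are illustrative assumptions, not values from the paper.

```python
# Sketch: train an empirical NTK by minimizing KARE rather than empirical risk.
# KARE(K, y; lam) = (1/n) y^T (K/n + lam I)^{-2} y
#                   / [ (1/n) tr((K/n + lam I)^{-1}) ]^2
import jax
import jax.numpy as jnp

def init_params(key, dims=(10, 64, 64, 1)):
    """Random MLP parameters; the architecture is an arbitrary example."""
    params = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def mlp(params, x):
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze(-1)

def empirical_ntk(params, X):
    """K_ij = <grad_theta f(x_i), grad_theta f(x_j)> for a scalar-output network."""
    f_scalar = lambda p, x: mlp(p, x[None, :])[0]
    grads = jax.vmap(lambda x: jax.grad(f_scalar)(params, x))(X)
    leaves = [g.reshape(X.shape[0], -1) for g in jax.tree_util.tree_leaves(grads)]
    J = jnp.concatenate(leaves, axis=1)          # (n, num_params) Jacobian
    return J @ J.T                               # (n, n) empirical NTK

def kare(params, X, y, lam=1e-3):
    """Kernel Alignment Risk Estimator of the NTK induced by `params`."""
    n = X.shape[0]
    K = empirical_ntk(params, X)
    A_inv = jnp.linalg.inv(K / n + lam * jnp.eye(n))
    numerator = (y @ (A_inv @ A_inv) @ y) / n
    denominator = (jnp.trace(A_inv) / n) ** 2
    return numerator / denominator

# Gradient descent directly on the KARE objective (toy synthetic data).
key = jax.random.PRNGKey(0)
X = jax.random.normal(key, (128, 10))
y = jnp.sin(X[:, 0]) + 0.1 * jax.random.normal(key, (128,))
params = init_params(key)
lr = 1e-2
for step in range(200):
    loss, grads = jax.value_and_grad(kare)(params, X, y)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```

After training, the learned kernel K would be used for kernel ridge regression on the training labels; the pairing of a KARE-trained kernel with a kernel predictor is what the abstract compares against the original DNN and its after-kernel.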

Country of Origin
πŸ‡¨πŸ‡­ πŸ‡ΊπŸ‡Έ Switzerland, United States

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)