Score: 3

Convergence and Sketching-Based Efficient Computation of Neural Tangent Kernel Weights in Physics-Based Loss

Published: November 19, 2025 | arXiv ID: 2511.15530v1

By: Max Hirsch, Federico Pichi

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Speeds up and stabilizes training of physics-informed AI models by automatically balancing competing training goals.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In multi-objective optimization, multiple loss terms are weighted and added together to form a single objective. These weights are chosen to balance the competing losses according to some meta-goal. For example, in physics-informed neural networks (PINNs), the weights are often chosen adaptively to improve the network's generalization error. A popular choice of adaptive weights is based on the neural tangent kernel (NTK) of the PINN, which describes the evolution of the network in predictor space during training. The convergence of such an adaptive weighting algorithm is not clear a priori. Moreover, these NTK-based weights must be updated frequently during training, which further increases the computational burden of learning. In this paper, we prove that under appropriate conditions, gradient descent enhanced with adaptive NTK-based weights is convergent in a suitable sense. We then address the problem of computational efficiency by developing a randomized algorithm, inspired by a predictor-corrector approach and matrix sketching, which produces unbiased estimates of the NTK up to an arbitrarily small discretization error. Finally, we provide numerical experiments to support our theoretical findings and to show the efficacy of our randomized algorithm.
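To make the two ingredients in the abstract concrete, here is a minimal, hedged sketch in JAX; it is not taken from the authors' repository. It uses the popular trace-balancing rule lambda_i = tr(K)/tr(K_i) as a stand-in for the paper's NTK-based weights, and a plain Gaussian sketch of the parameter-space Jacobian as a stand-in for the paper's predictor-corrector sketching algorithm, which is not reproduced here. The names `apply_fn`, `x_r`, `x_b`, the sketch width `s`, and the scalar-output assumption are illustrative only; a full PINN implementation would differentiate the PDE residual operator applied to the network, not the raw output.

```python
import jax
import jax.numpy as jnp

def flat_jacobian(apply_fn, params, x):
    """J[i, :] = d f(params, x_i) / d params, with the parameter pytree flattened to one vector."""
    def single(xi):
        grads = jax.grad(lambda p: apply_fn(p, xi))(params)
        return jnp.concatenate([g.ravel() for g in jax.tree_util.tree_leaves(grads)])
    return jax.vmap(single)(x)  # shape (n_points, n_params)

def sketched_ntk(J, key, s):
    """Unbiased estimate of the empirical NTK block K = J J^T.
    With Omega having i.i.d. N(0, 1/s) entries, E[Omega Omega^T] = I,
    hence E[(J Omega)(J Omega)^T] = J J^T."""
    omega = jax.random.normal(key, (J.shape[1], s)) / jnp.sqrt(s)
    jo = J @ omega  # (n_points, s): the sketched Jacobian
    return jo @ jo.T

def ntk_weights(apply_fn, params, x_r, x_b, key, s=64):
    """Trace-balancing weights for a two-term PINN loss, computed from sketched NTK blocks."""
    key_r, key_b = jax.random.split(key)
    tr_r = jnp.trace(sketched_ntk(flat_jacobian(apply_fn, params, x_r), key_r, s))
    tr_b = jnp.trace(sketched_ntk(flat_jacobian(apply_fn, params, x_b), key_b, s))
    total = tr_r + tr_b
    return total / tr_r, total / tr_b  # lambda_r, lambda_b
```

In use, the returned weights would multiply the residual and boundary mean-squared errors before each gradient step and be refreshed periodically during training; the sketch width `s` trades the variance of the NTK estimate against the cost of forming the sketched Jacobian.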

Country of Origin
🇺🇸 🇮🇹 United States, Italy

Repos / Data Links
https://github.com/maxhirsch/Efficient-NTK

Page Count
29 pages

Category
Mathematics: Numerical Analysis