Random feature approximation for general spectral methods
By: Mike Nguyen, Nicole Mücke
Potential Business Impact:
Makes large-scale AI learning faster without losing accuracy.
Random feature approximation is arguably one of the most widely used techniques for kernel methods in large-scale learning algorithms. In this work, we analyze the generalization properties of random feature methods, extending previous results for Tikhonov regularization to a broad class of spectral regularization techniques. This class includes not only explicit methods but also implicit schemes such as gradient descent and accelerated algorithms like the Heavy-Ball and Nesterov methods. The framework thereby enables a theoretical analysis of neural networks and neural operators trained via gradient descent, through the lens of the Neural Tangent Kernel (NTK). For our estimators, we obtain optimal learning rates over regularity classes defined through appropriate source conditions, including classes that are not contained in the reproducing kernel Hilbert space. This improves or completes previous results obtained in related settings for specific kernel algorithms.
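To make the setting concrete, the following is a minimal sketch (not the authors' estimator) of the two ingredients the abstract combines: a random feature approximation of a Gaussian kernel via random Fourier features, and gradient descent on the resulting least-squares problem, where early stopping acts as the implicit spectral regularization. All data, hyperparameters, and variable names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (illustrative only).
n, d = 200, 1
X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(n)

# Random Fourier features: phi(x) = sqrt(2/M) * cos(W^T x + b) approximates
# the Gaussian kernel k(x, x') = exp(-||x - x'||^2 / (2 * sigma^2)).
M, sigma = 300, 1.0
W = rng.standard_normal((d, M)) / sigma
b = rng.uniform(0, 2 * np.pi, size=M)
Phi = np.sqrt(2.0 / M) * np.cos(X @ W + b)

# Gradient descent on the empirical least-squares loss in feature space.
# The number of iterations plays the role of the regularization parameter,
# i.e., an implicit spectral regularization scheme.
theta = np.zeros(M)
step = 1.0 / np.linalg.norm(Phi.T @ Phi / n, 2)  # step size <= 1/L for stability
for _ in range(500):
    grad = Phi.T @ (Phi @ theta - y) / n
    theta -= step * grad

y_hat = Phi @ theta  # predictions of the random-feature estimator
```

Other spectral methods covered by the paper (e.g., Heavy-Ball or Nesterov acceleration) would replace the plain gradient-descent update above while keeping the same random feature map.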
Similar Papers
Cauchy Random Features for Operator Learning in Sobolev Space
Machine Learning (CS)
Teaches computers to learn math faster.
Random at First, Fast at Last: NTK-Guided Fourier Pre-Processing for Tabular DL
Machine Learning (CS)
Makes computer learning faster and better.
Tensor Sketch: Fast and Scalable Polynomial Kernel Approximation
Data Structures and Algorithms
Speeds up computer learning with complex math.