A Class of Random-Kernel Network Models
By: James Tian
Potential Business Impact:
Makes computers learn faster with less guessing.
We introduce random-kernel networks, a multilayer extension of random feature models where depth is created by deterministic kernel composition and randomness enters only in the outermost layer. We prove that deeper constructions can approximate certain functions with fewer Monte Carlo samples than any shallow counterpart, establishing a depth separation theorem in sample complexity.
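The abstract's construction can be illustrated with a minimal sketch: a deterministic inner feature map plays the role of the composed "deep" kernel layers, and Monte Carlo sampling (here, standard random Fourier features for a Gaussian outer kernel) enters only at the outermost layer. The specific maps below (a polynomial inner layer, a Gaussian outer kernel) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def poly_features(X):
    # Deterministic inner "layer": explicit degree-2 polynomial feature map.
    # No sampling happens here -- depth is created by deterministic composition.
    quad = np.einsum('ni,nj->nij', X, X).reshape(len(X), -1)
    return np.hstack([X, quad])

def composed_kernel(X, Y, sigma=1.0):
    # Exact composed kernel: Gaussian kernel applied on top of the inner map,
    # i.e. k(x, y) = exp(-||Phi(x) - Phi(y)||^2 / (2 sigma^2)).
    Phi_X, Phi_Y = poly_features(X), poly_features(Y)
    sq = (np.sum(Phi_X**2, 1)[:, None] - 2 * Phi_X @ Phi_Y.T
          + np.sum(Phi_Y**2, 1)[None, :])
    return np.exp(-sq / (2 * sigma**2))

def mc_kernel(X, Y, n_samples=2000, sigma=1.0, rng=rng):
    # Randomness enters ONLY here, at the outermost layer: random Fourier
    # features give a Monte Carlo estimate of the Gaussian outer kernel.
    Phi_X, Phi_Y = poly_features(X), poly_features(Y)
    d = Phi_X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_samples))
    b = rng.uniform(0.0, 2 * np.pi, size=n_samples)
    zx = np.sqrt(2.0 / n_samples) * np.cos(Phi_X @ W + b)
    zy = np.sqrt(2.0 / n_samples) * np.cos(Phi_Y @ W + b)
    return zx @ zy.T

X = rng.normal(size=(5, 3))
K_exact = composed_kernel(X, X)
K_mc = mc_kernel(X, X)
err = np.max(np.abs(K_exact - K_mc))
print(err)  # Monte Carlo error shrinks as n_samples grows
```

The point of the separation result is then about how many such outer-layer samples are needed: the paper proves that for certain targets, the deep (composed) construction needs fewer Monte Carlo samples than any shallow random feature model can achieve.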
Similar Papers
Learning Multi-Index Models with Hyper-Kernel Ridge Regression
Machine Learning (Stat)
Helps computers learn complex tasks better than before.
Mathematical Foundations of Neural Tangents and Infinite-Width Networks
Machine Learning (CS)
Makes AI learn better and faster.
Solving Approximation Tasks with Greedy Deep Kernel Methods
Numerical Analysis
Makes computers learn better by stacking simple math.