Iteratively reweighted kernel machines efficiently learn sparse functions

Published: May 13, 2025 | arXiv ID: 2505.08277v1

By: Libin Zhu, Damek Davis, Dmitriy Drusvyatskiy, and more

Potential Business Impact:

Finds important patterns in data automatically.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

The impressive practical performance of neural networks is often attributed to their ability to learn low-dimensional data representations and hierarchical structure directly from data. In this work, we argue that these two phenomena are not unique to neural networks, and can be elicited from classical kernel methods. Namely, we show that the derivative of the kernel predictor can detect the influential coordinates with low sample complexity. Moreover, by iteratively using the derivatives to reweight the data and retrain kernel machines, one is able to efficiently learn hierarchical polynomials with finite leap complexity. Numerical experiments illustrate the developed theory.

Page Count
113 pages

Category
Statistics - Machine Learning (stat.ML)