Score: 1

Super-fast rates of convergence for Neural Networks Classifiers under the Hard Margin Condition

Published: May 13, 2025 | arXiv ID: 2505.08262v1

By: Nathanael Tepakbong, Ding-Xuan Zhou, Xiang Zhou

Potential Business Impact:

Shows that deep neural network classifiers can reach very low error rates from comparatively little training data when the two classes are cleanly separated, which could lower data-collection costs for automated classification systems.

Business Areas:
A/B Testing, Data and Analytics

We study the classical binary classification problem for hypothesis spaces of Deep Neural Networks (DNNs) with ReLU activation under Tsybakov's low-noise condition with exponent $q>0$, and its limit-case $q\to\infty$ which we refer to as the "hard-margin condition". We show that DNNs which minimize the empirical risk with square loss surrogate and $\ell_p$ penalty can achieve finite-sample excess risk bounds of order $\mathcal{O}\left(n^{-\alpha}\right)$ for arbitrarily large $\alpha>0$ under the hard-margin condition, provided that the regression function $\eta$ is sufficiently smooth. The proof relies on a novel decomposition of the excess risk which might be of independent interest.
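For context, the conditions named in the abstract have standard formulations; the versions below follow the usual textbook statement (exact constants and normalization may differ from the paper's), with $\eta(x)=\mathbb{P}(Y=1\mid X=x)$ denoting the regression function. Tsybakov's low-noise condition with exponent $q>0$ requires a constant $C>0$ such that
$$\mathbb{P}_X\bigl(0<|2\eta(X)-1|\le t\bigr)\le C\,t^{q}\qquad\text{for all }t>0,$$
and its limit $q\to\infty$, the hard-margin condition, asks that the label noise be bounded away from the decision boundary:
$$|2\eta(X)-1|\ge \delta\quad\text{almost surely, for some }\delta>0.$$
The estimator the abstract describes is, schematically, the penalized empirical risk minimizer over a ReLU DNN class $\mathcal{F}$ (the penalty weight $\lambda$ and the notation $\theta(f)$ for the network parameters are illustrative, not taken from the paper):
$$\hat f_n\in\arg\min_{f\in\mathcal{F}}\;\frac{1}{n}\sum_{i=1}^{n}\bigl(f(X_i)-Y_i\bigr)^2+\lambda\,\|\theta(f)\|_{\ell_p}^{p},$$
and the main result bounds its excess classification risk by $\mathcal{O}(n^{-\alpha})$ for arbitrarily large $\alpha>0$ under the hard-margin condition, provided $\eta$ is sufficiently smooth.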

Country of Origin
🇦🇺 🇭🇰 Australia, Hong Kong

Page Count
31 pages

Category
Computer Science:
Machine Learning (CS)