Super-fast rates of convergence for Neural Networks Classifiers under the Hard Margin Condition
By: Nathanael Tepakbong, Ding-Xuan Zhou, Xiang Zhou
Potential Business Impact:
Shows that neural network classifiers can learn very accurately from much less data when the two classes are cleanly separated.
We study the classical binary classification problem for hypothesis spaces of Deep Neural Networks (DNNs) with ReLU activation under Tsybakov's low-noise condition with exponent $q>0$, and its limiting case $q\to\infty$, which we refer to as the "hard-margin condition". We show that DNNs which minimize the empirical risk with the square-loss surrogate and an $\ell_p$ penalty can achieve finite-sample excess risk bounds of order $\mathcal{O}\left(n^{-\alpha}\right)$ for arbitrarily large $\alpha>0$ under the hard-margin condition, provided that the regression function $\eta$ is sufficiently smooth. The proof relies on a novel decomposition of the excess risk, which may be of independent interest.
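The estimator described in the abstract is a penalized empirical risk minimizer: fit a ReLU network to labels in {-1, +1} with the square-loss surrogate plus an $\ell_p$ penalty on the weights, then classify with the sign of the network output. The following PyTorch code is a minimal sketch of that recipe, not the paper's construction: the paper analyzes an exact minimizer over a constrained DNN class, whereas this sketch approximates it by gradient descent, and the width, depth, penalty exponent p, and regularization weight lam are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's construction): penalized ERM
# with the square-loss surrogate and an l_p penalty over a small ReLU network.
import torch
import torch.nn as nn

def fit_relu_classifier(X, y, p=1.0, lam=1e-3, width=64, depth=3,
                        lr=1e-2, epochs=500):
    """X: (n, d) float tensor; y: (n,) tensor with labels in {-1, +1}."""
    layers, in_dim = [], X.shape[1]
    for _ in range(depth):
        layers += [nn.Linear(in_dim, width), nn.ReLU()]
        in_dim = width
    layers.append(nn.Linear(in_dim, 1))
    f = nn.Sequential(*layers)

    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        out = f(X).squeeze(-1)
        # Square-loss surrogate: (f(x) - y)^2 equals (1 - y*f(x))^2 when y is in {-1, +1}.
        emp_risk = ((out - y.float()) ** 2).mean()
        # l_p penalty on all network weights (p and lam are illustrative choices).
        penalty = sum(w.abs().pow(p).sum() for w in f.parameters())
        (emp_risk + lam * penalty).backward()
        opt.step()
    return f

# Toy usage: classify by the sign of the fitted network's output.
X = torch.randn(200, 2)
y = torch.sign(X[:, 0] + 0.1 * torch.randn(200))
f = fit_relu_classifier(X, y)
predictions = torch.sign(f(X).squeeze(-1)).detach()
```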
Similar Papers
Optimal Convergence Rates of Deep Neural Network Classifiers
Machine Learning (Stat)
Makes computers learn better with less data.
Online Learning of Neural Networks
Machine Learning (Stat)
Teaches computers to learn faster with fewer mistakes.
Minimax learning rates for estimating binary classifiers under margin conditions
Machine Learning (Stat)
Helps computers learn faster from data.