A general framework for deep learning
By: William Kengne, Modou Wade
This paper develops a general approach to deep learning in a setting that includes nonparametric regression and classification. We build a framework for data that satisfy a generalized Bernstein-type inequality, which covers independent, $\phi$-mixing, strongly mixing and $\mathcal{C}$-mixing observations. Two estimators are proposed: a non-penalized deep neural network estimator (NPDNN) and a sparse-penalized deep neural network estimator (SPDNN). For each estimator, bounds on the expected excess risk are established over the class of Hölder smooth functions and the class of compositions of Hölder functions. Applications to independent data, as well as to $\phi$-mixing, strongly mixing and $\mathcal{C}$-mixing processes, are considered; for each of these examples, upper bounds on the expected excess risk of the proposed NPDNN and SPDNN predictors are derived. It is shown that both the NPDNN and SPDNN estimators are minimax optimal (up to a logarithmic factor) in many classical settings.
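To illustrate the distinction between the two estimators, the sketch below fits a ReLU network by plain empirical risk minimization (the NPDNN idea) versus ERM plus a sparsity penalty on the weights (the SPDNN idea). This is a minimal illustration only: the squared loss, the clipped-L1 penalty (a surrogate for the number of nonzero weights, common in the sparse-DNN literature), the network architecture, and all hyperparameters (lam, tau, width, depth) are assumptions for exposition, not the paper's exact formulation.

import torch
import torch.nn as nn

class ReLUNet(nn.Module):
    # A generic fully connected ReLU network; the architecture is illustrative.
    def __init__(self, in_dim, width=64, depth=3):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def sparse_penalty(model, lam=1e-3, tau=1e-2):
    # Clipped-L1 surrogate: grows like L1 near zero but saturates at 1
    # per weight, so it approximates a count of nonzero weights (L0).
    total = 0.0
    for p in model.parameters():
        total = total + torch.clamp(p.abs() / tau, max=1.0).sum()
    return lam * total

def fit(X, y, penalized=True, epochs=200, lr=1e-3):
    # penalized=False mimics the NPDNN estimator (pure ERM);
    # penalized=True mimics the SPDNN estimator (ERM + sparsity penalty).
    model = ReLUNet(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        risk = ((model(X).squeeze(-1) - y) ** 2).mean()  # empirical risk
        loss = risk + sparse_penalty(model) if penalized else risk
        loss.backward()
        opt.step()
    return model

The clipped-L1 form (rather than plain L1) is chosen here because saturating the penalty per weight keeps large weights unshrunk while still encouraging many weights toward zero, which matches the sparsity-driven excess risk analyses typical of this line of work.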