New Approximation Results and Optimal Estimation for Fully Connected Deep Neural Networks
By: Zhaoji Tang
Potential Business Impact:
Shows that standard deep neural networks can learn from data at the fastest rate theory allows.
\citet{farrell2021deep} establish non-asymptotic high-probability bounds for general deep feedforward neural network estimators with the rectified linear unit (ReLU) activation function, with \citet[Theorem 1]{farrell2021deep} achieving a suboptimal convergence rate for fully connected feedforward networks. The authors suggest that improved approximation results for fully connected networks could yield a sharper version of \citet[Theorem 1]{farrell2021deep} without altering the theoretical framework. By deriving approximation bounds tailored to narrower fully connected deep neural networks, this note demonstrates that \citet[Theorem 1]{farrell2021deep} can be improved to an optimal convergence rate (up to a logarithmic factor). Furthermore, this note briefly shows that deep neural network estimators can mitigate the curse of dimensionality for functions with compositional structure and for functions defined on manifolds.
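For intuition, here is a minimal sketch of the rate gap at issue, assuming the standard setting of this literature: a regression function of smoothness $\beta$ estimated on $[0,1]^d$ from $n$ observations under squared-error loss, with logarithmic factors omitted (the exact exponents correspond to this smoothness assumption):
\[
\underbrace{n^{-\beta/(\beta+d)}}_{\text{rate in \citet[Theorem 1]{farrell2021deep}}}
\qquad \text{versus} \qquad
\underbrace{n^{-2\beta/(2\beta+d)}}_{\text{minimax-optimal rate}} .
\]
Since $2\beta(\beta+d) > \beta(2\beta+d)$ whenever $\beta, d > 0$, the optimal exponent is strictly larger, so the improved bound converges strictly faster, and the gap widens as the input dimension $d$ grows. For compositional targets, the same optimal exponent holds with $d$ replaced by a typically much smaller effective dimension, which is the sense in which the curse of dimensionality is mitigated.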
Similar Papers
An in-depth look at approximation via deep and narrow neural networks
Machine Learning (CS)
Shows how deep but narrow neural networks can still approximate complicated functions.
Statistically guided deep learning
Statistics Theory
Uses statistical theory to guide deep learning toward more accurate results.
Expressive Power of Deep Networks on Manifolds: Simultaneous Approximation
Numerical Analysis
Helps computers solve hard math problems on curved shapes.