An in-depth look at approximation via deep and narrow neural networks
By: Joris Dommel, Sven A. Wegner
Potential Business Impact:
Explains when very narrow deep networks can or cannot approximate a target function, and points to dying ReLU neurons as the cause.
In 2017, Hanin and Sellke showed that the class of arbitrarily deep, real-valued, feed-forward and ReLU-activated networks of width w forms a dense subset of the space of continuous functions on R^n, with respect to the topology of uniform convergence on compact sets, if and only if w > n holds. To show the necessity, a concrete counterexample function f: R^n -> R was used. In this note, we actually approximate this very f by neural networks in the two cases w = n and w = n+1, around the aforementioned threshold. We study how the approximation quality behaves as we vary the depth, and which effect (spoiler alert: dying neurons) causes that behavior.
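The experiment described in the abstract can be illustrated with a short sketch. The following Python/PyTorch snippet is not the authors' code; it trains deep, narrow ReLU networks of width w = n and w = n+1 on a placeholder target (here the Euclidean norm, chosen only as a stand-in for the paper's counterexample f), estimates the uniform error on a sample of the compact set, and reports the fraction of "dead" ReLU units. All network sizes, the training setup, and the target are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): approximate a placeholder target
# with deep, narrow ReLU networks of width w = n and w = n + 1, and count the
# ReLU units that never fire ("dying neurons").
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 2                                            # input dimension (assumption)
target = lambda x: x.norm(dim=1, keepdim=True)   # stand-in for the counterexample f

def make_net(width: int, depth: int) -> nn.Sequential:
    """Fully connected ReLU network: n -> width -> ... -> width -> 1."""
    layers, d_in = [], n
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.ReLU()]
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def dead_fraction(net: nn.Sequential, x: torch.Tensor) -> float:
    """Fraction of ReLU units whose output is zero for every input in x."""
    dead, total, h = 0, 0, x
    with torch.no_grad():
        for layer in net:
            h = layer(h)
            if isinstance(layer, nn.ReLU):
                dead += (h.max(dim=0).values <= 0).sum().item()
                total += h.shape[1]
    return dead / max(total, 1)

# Training data on a compact set, e.g. [-1, 1]^n.
x_train = 2 * torch.rand(4096, n) - 1
y_train = target(x_train)
x_test = 2 * torch.rand(8192, n) - 1             # proxy grid for the sup-norm error

for width in (n, n + 1):                         # the two regimes around the threshold
    for depth in (2, 8, 32):                     # vary the depth
        net = make_net(width, depth)
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(2000):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(x_train), y_train)
            loss.backward()
            opt.step()
        with torch.no_grad():
            sup_err = (net(x_test) - target(x_test)).abs().max().item()
        print(f"width={width} depth={depth:2d}  sup-error~{sup_err:.3f}  "
              f"dead ReLUs={dead_fraction(net, x_test):.0%}")
```

Under this kind of setup, one would compare how the estimated sup-norm error evolves with depth at width n versus width n+1, and whether a growing share of dead ReLU units accompanies any degradation, in the spirit of the behavior the abstract describes.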
Similar Papers
Nonlocal techniques for the analysis of deep ReLU neural network approximations
Machine Learning (CS)
Makes AI learn better from fewer examples.
Provable wavelet-based neural approximation
Machine Learning (CS)
Helps computers learn any pattern, even tricky ones.
On shallow feedforward neural networks with inputs from a topological space
Machine Learning (CS)
Maps shapes to numbers for computers.