
An in-depth look at approximation via deep and narrow neural networks

Published: October 8, 2025 | arXiv ID: 2510.07202v1

By: Joris Dommel, Sven A. Wegner

Potential Business Impact:

Clarifies when very narrow, deep neural networks fail to approximate certain functions (due to dying neurons), which can inform choices of network width and depth.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

In 2017, Hanin and Sellke showed that the class of arbitrarily deep, real-valued, feed-forward and ReLU-activated networks of width w forms a dense subset of the space of continuous functions on R^n, with respect to the topology of uniform convergence on compact sets, if and only if w>n holds. To show the necessity, a concrete counterexample function f:R^n->R was used. In this note we approximate this very function f by neural networks in the two cases w=n and w=n+1, i.e., just below and just above the aforementioned threshold. We study how the approximation quality behaves as the depth varies, and which effect (spoiler alert: dying neurons) causes that behavior.
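The setup in the abstract lends itself to a small experiment. The sketch below is a hedged illustration, not the paper's method: it assumes PyTorch, input dimension n = 1, and a placeholder target (|x| on a compact interval) rather than the paper's specific counterexample f. It trains fixed-width, arbitrarily deep ReLU networks at widths w = n and w = n + 1 and reports the sup-error together with the fraction of "dead" ReLU units (units that output zero on every sample), a simple proxy for the dying-neuron effect mentioned above.

```python
import torch
import torch.nn as nn

def make_mlp(width, depth, in_dim=1):
    # Feed-forward, ReLU-activated network of fixed width and given depth.
    layers = [nn.Linear(in_dim, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

def dead_fraction(model, x):
    # Fraction of hidden ReLU units that are zero for every input in x.
    dead, total = 0, 0
    h = x
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            alive = (h > 0).any(dim=0)
            dead += (~alive).sum().item()
            total += alive.numel()
    return dead / total

# Hypothetical setup: n = 1, placeholder target on a compact set.
torch.manual_seed(0)
x = torch.linspace(-2.0, 2.0, 512).unsqueeze(1)
y = x.abs()  # stand-in target, not the counterexample f from the paper

for width in (1, 2):            # w = n and w = n + 1 for n = 1
    for depth in (2, 8, 32):
        model = make_mlp(width, depth)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(2000):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        with torch.no_grad():
            sup_err = (model(x) - y).abs().max().item()
            dead = dead_fraction(model, x)
        print(f"w={width} depth={depth} sup-error={sup_err:.3f} dead ReLUs={dead:.0%}")
```

Under these assumptions one would expect the width-1 networks to plateau (or degrade as depth grows and more units die), while width-2 networks can drive the error down; the paper's actual experiments use its concrete counterexample f and a general input dimension n.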

Country of Origin
🇩🇪 Germany

Page Count
11 pages

Category
Computer Science:
Machine Learning (CS)