On the Depth of Monotone ReLU Neural Networks and ICNNs
By: Egor Bakaev, Florestan Brunck, Christoph Hertrich, and more
Potential Business Impact:
Clarifies fundamental limits of restricted neural network architectures: monotone ReLU networks cannot even approximate the maximum of several numbers, and input convex networks need substantial depth to compute it exactly.
We study two models of ReLU neural networks: monotone networks (ReLU$^+$) and input convex neural networks (ICNN). Our focus is on expressivity, mostly in terms of depth, and we prove the following lower bounds. For the maximum function MAX$_n$, which computes the maximum of $n$ real numbers, we show that ReLU$^+$ networks cannot compute MAX$_n$, or even approximate it. We prove a sharp lower bound of $n$ on the ICNN depth complexity of MAX$_n$. We also prove depth separations between ReLU networks and ICNNs: for every $k$, there is a ReLU network of depth $2$ and size $O(k^2)$ that cannot be simulated by any ICNN of depth $k$. The proofs are based on deep connections between neural networks and polyhedral geometry, and also use isoperimetric properties of triangulations.
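To make the two models concrete, here is a minimal sketch (not from the paper) of the constructions the abstract refers to, using standard definitions: the classic one-ReLU identity for the maximum of two numbers, and an ICNN-style forward pass in which the hidden-to-hidden weights are constrained to be nonnegative so that the output is convex in the input. The layer shapes and the use of `np.abs` to enforce nonnegativity are illustrative assumptions, not the authors' construction.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Classic exact representation of the maximum of two numbers with one ReLU:
#   max(a, b) = b + ReLU(a - b).
# The ReLU's input uses a negative weight on b; monotone (ReLU^+) networks,
# which allow only nonnegative weights, cannot use this trick.
def max2_relu(a, b):
    return b + relu(a - b)

# Sketch of an input convex neural network (ICNN) forward pass:
#   z_{k+1} = ReLU(W_z z_k + W_x x + b),  with W_z entrywise nonnegative,
# which makes the output a convex function of the input x.
def icnn_forward(x, layers):
    z = None
    for W_z, W_x, b in layers:
        pre = W_x @ x + b
        if z is not None:
            # nonnegativity of the hidden-to-hidden weights enforced via abs
            pre = pre + np.abs(W_z) @ z
        z = relu(pre)
    return z

# Example usage with arbitrary weights (hypothetical shapes)
rng = np.random.default_rng(0)
x = rng.normal(size=3)
layers = [(None, rng.normal(size=(4, 3)), np.zeros(4)),
          (rng.normal(size=(2, 4)), rng.normal(size=(2, 3)), np.zeros(2))]
print(max2_relu(3.0, 5.0))      # 5.0
print(icnn_forward(x, layers))  # convex (and hence not monotone-restricted) output
```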