On efficiently computable functions, deep networks and sparse compositionality
By: Tomaso Poggio
Potential Business Impact:
Shows that any task a computer can solve efficiently can also be represented by a compact, layered network.
We show that \emph{efficient Turing computability} at any fixed input/output precision implies the existence of \emph{compositionally sparse} (bounded-fan-in, polynomial-size) DAG representations and of corresponding neural approximants achieving the target precision. Concretely: if $f:[0,1]^d\to\mathbb{R}^m$ is computable in time polynomial in the bit-depths, then for every pair of precisions $(n,m_{\mathrm{out}})$ there exists a bounded-fan-in Boolean circuit of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ computing the discretized map; replacing each gate by a constant-size neural emulator yields a deep network of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ that achieves accuracy $\varepsilon=2^{-m_{\mathrm{out}}}$. We also relate these constructions to compositional approximation rates \cite{MhaskarPoggio2016b,poggio_deep_shallow_2017,Poggio2017,Poggio2023HowDS} and to optimization viewed as hierarchical search over sparse structures.
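The gate-by-gate replacement step can be illustrated with a minimal sketch (not code from the paper): on $\{0,1\}$ inputs, AND, OR, and NOT are each emulated exactly by a single ReLU unit, and a bounded-fan-in Boolean circuit given in topological order is then turned into a deep network whose size and depth track the circuit's. The names `GATE_EMULATORS`, `emulate_circuit`, and the XOR example below are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Constant-size ReLU emulators for bounded-fan-in Boolean gates.
# On inputs in {0, 1} these reproduce the gates exactly, which is the
# "replace each gate by a small neural module" step of the construction.
GATE_EMULATORS = {
    "AND": lambda a, b: relu(a + b - 1.0),        # 1 iff a = b = 1
    "OR":  lambda a, b: 1.0 - relu(1.0 - a - b),  # 0 iff a = b = 0
    "NOT": lambda a:    relu(1.0 - a),            # flips a bit
}

def emulate_circuit(circuit, inputs):
    """Evaluate a bounded-fan-in Boolean circuit (a DAG listed in
    topological order) by replacing every gate with its ReLU emulator.
    Each gate contributes a constant-size block, so the resulting
    network's size and depth match the circuit's."""
    values = dict(inputs)  # wire name -> value in {0.0, 1.0}
    for name, gate, args in circuit:
        values[name] = GATE_EMULATORS[gate](*(values[a] for a in args))
    return values

# Example: 2-bit XOR as a sparse compositional DAG over AND/OR/NOT.
xor_circuit = [
    ("na",  "NOT", ("x0",)),
    ("nb",  "NOT", ("x1",)),
    ("t0",  "AND", ("x0", "nb")),
    ("t1",  "AND", ("na", "x1")),
    ("out", "OR",  ("t0", "t1")),
]

for x0 in (0.0, 1.0):
    for x1 in (0.0, 1.0):
        v = emulate_circuit(xor_circuit, {"x0": x0, "x1": x1})
        print(int(x0), int(x1), "->", int(v["out"]))
```

Because each emulator has constant size, a circuit of size and depth $\mathrm{poly}(n+m_{\mathrm{out}})$ yields a network with the same asymptotic size and depth, matching the bound stated in the abstract.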
Similar Papers
Position: A Theory of Deep Learning Must Include Compositional Sparsity
Machine Learning (CS)
Lets computers learn complex things by breaking them down.
On the expressivity of sparse maxout networks
Machine Learning (CS)
Makes computer brains learn more with less data.
Sparse Computations in Deep Learning Inference
Computational Engineering, Finance, and Science
Makes AI faster and use less power.