Deep Networks Learn Deep Hierarchical Models
By: Amit Daniely
Potential Business Impact:
Shows that deep networks can efficiently learn concepts organized the way people teach them: simple labels first, then labels built out of simpler ones.
We consider supervised learning with $n$ labels and show that layerwise SGD on residual networks can efficiently learn a class of hierarchical models. This model class assumes the existence of an (unknown) label hierarchy $L_1 \subseteq L_2 \subseteq \dots \subseteq L_r = [n]$, where labels in $L_1$ are simple functions of the input, while for $i > 1$, labels in $L_i$ are simple functions of simpler labels. Our class surpasses models that were previously shown to be learnable by deep learning algorithms, in the sense that it reaches the depth limit of efficient learnability. That is, there are models in this class that require polynomial depth to express, whereas previous models can be computed by log-depth circuits. Furthermore, we suggest that learnability of such hierarchical models might eventually form a basis for understanding deep learning. Beyond their natural fit for domains where deep learning excels, we argue that the mere existence of human ``teachers'' supports the hypothesis that hierarchical structures are inherently available. By providing granular labels, teachers effectively reveal ``hints'' or ``snippets'' of the internal algorithms used by the brain. We formalize this intuition, showing that in a simplified model where a teacher is partially aware of their internal logic, a hierarchical structure emerges that facilitates efficient learnability.
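To make the setup concrete, the sketch below illustrates one way the ingredients of the abstract could fit together: synthetic labels arranged in a hierarchy $L_1 \subseteq \dots \subseteq L_r$ (level-1 labels are simple functions of the input, level-$i$ labels are simple functions of level-$(i{-}1)$ labels), and a residual network whose $i$-th block is trained layerwise against the level-$i$ labels. This is a hypothetical illustration rather than the paper's construction or its guarantees; the halfspace "teacher" functions, the block architecture, and all names and hyperparameters (`d`, `r`, `n_per_level`, learning rate, step counts) are assumptions made only for this sketch.

```python
# Illustrative sketch (not the paper's algorithm): layerwise SGD on a residual
# network, with one block per level of a synthetic label hierarchy.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, r, n_per_level, N = 32, 3, 4, 2048   # input dim, hierarchy depth, labels per level, samples

# --- synthetic hierarchical labels ---------------------------------------------
# Level-1 labels are halfspaces of the input; level-i labels are halfspaces of the
# level-(i-1) labels, so each level is a "simple function of simpler labels".
X = torch.randn(N, d)
levels, prev = [], X
for i in range(r):
    W = torch.randn(prev.shape[1], n_per_level) / prev.shape[1] ** 0.5
    y = (prev @ W > 0).float()
    levels.append(y)
    prev = y

# --- residual network trained layerwise ----------------------------------------
class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return x + self.f(x)             # residual connection

embed = nn.Linear(d, 64)                 # fixed random embedding into the residual stream
blocks = nn.ModuleList([Block(64) for _ in range(r)])
heads = nn.ModuleList([nn.Linear(64, n_per_level) for _ in range(r)])
loss_fn = nn.BCEWithLogitsLoss()

h = embed(X).detach()                    # representation entering the first block
for i in range(r):
    # Train only block i (and its readout head) against the level-i labels.
    opt = torch.optim.SGD(list(blocks[i].parameters()) + list(heads[i].parameters()), lr=0.1)
    for _ in range(200):
        opt.zero_grad()
        loss = loss_fn(heads[i](blocks[i](h)), levels[i])
        loss.backward()
        opt.step()
    with torch.no_grad():
        h = blocks[i](h)                 # freeze the block; pass its output to the next level
    print(f"level {i + 1}: loss {loss.item():.3f}")
```

The design choice the sketch tries to convey is the layerwise schedule: each residual block is fit only against the labels of its own level and then frozen, so deeper blocks see representations that already encode the simpler labels below them.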
Similar Papers
Provable Learning of Random Hierarchy Models and Hierarchical Shallow-to-Deep Chaining
Machine Learning (CS)
Proves deep networks learn complex patterns better.
The Computational Advantage of Depth: Learning High-Dimensional Hierarchical Functions with Gradient Descent
Machine Learning (Stat)
Deep learning finds patterns faster than simple methods.
Nested Learning: The Illusion of Deep Learning Architectures
Machine Learning (CS)
Helps computers learn and remember like humans.