Mitigating the Curse of Detail: Scaling Arguments for Feature Learning and Sample Complexity
By: Noa Rubin, Orit Davidovich, Zohar Ringel
Potential Business Impact:
Predicts how much data and how large a network is needed before neural networks start learning useful patterns.
Two pressing topics in the theory of deep learning are the interpretation of feature learning mechanisms and the determination of the implicit bias of networks in the rich regime. Current theories of rich feature learning effects revolve around networks with one or two trainable layers or deep linear networks. Furthermore, even in such limited settings, predictions often appear in the form of high-dimensional non-linear equations, which require computationally intensive numerical solutions. Given the many details that go into defining a deep learning problem, this analytical complexity is a significant and often unavoidable challenge. Here, we propose a powerful heuristic route for predicting the data and width scales at which various patterns of feature learning emerge. This form of scale analysis is considerably simpler than such exact theories and reproduces the scaling exponents of various known results. In addition, we make novel predictions on complex toy architectures, such as three-layer non-linear networks and attention heads, thus extending the scope of first-principles theories of deep learning.
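To give a flavor of what such a scale analysis delivers (an illustrative sketch, not an equation taken from the paper; the symbols $P$, $N$, $C$, and $\alpha$ are placeholders), the typical output is a crossover condition stating at what sample size $P$, relative to network width $N$, a given feature-learning pattern switches on:

$$ P \gtrsim C\,N^{\alpha}, $$

where the exponent $\alpha$ is set by the architecture and target function and $C$ is an order-one constant. Reading off such exponents, rather than solving the full high-dimensional non-linear equations, is the simplification the abstract advertises.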
Similar Papers
Neural Scaling Laws for Deep Regression
Machine Learning (CS)
Improves computer predictions with more data.
Navigating High Dimensional Concept Space with Metalearning
Machine Learning (CS)
Teaches computers to learn new ideas fast.
Statistical physics of deep learning: Optimal learning of a multi-layer perceptron near interpolation
Machine Learning (Stat)
Helps computers learn better from lots of information.