Emergence of Structure in Ensembles of Random Neural Networks
By: Luca Muscarnera, Luigi Loreti, Giovanni Todeschini, and more
Potential Business Impact:
Makes computers learn better from random guesses.
Randomness is ubiquitous in many applications across data science and machine learning. Remarkably, systems composed of random components often display emergent global behaviors that appear deterministic, manifesting a transition from microscopic disorder to macroscopic organization. In this work, we introduce a theoretical model for studying the emergence of collective behaviors in ensembles of random classifiers. We argue that, if the ensemble is weighted through the Gibbs measure defined by adopting the classification loss as an energy, then there exists a finite temperature parameter for the distribution such that the classification is optimal with respect to the loss (or the energy). Interestingly, for the case in which samples are generated by a Gaussian distribution and labels are constructed by employing a teacher perceptron, we analytically prove and numerically confirm that this optimal temperature depends neither on the teacher classifier (which is, by construction of the learning problem, unknown) nor on the number of random classifiers, highlighting the universal nature of the observed behavior. Experiments on the MNIST dataset underline the relevance of this phenomenon in high-quality, noiseless datasets. Finally, a physical analogy allows us to shed light on the self-organizing nature of the studied phenomenon.
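To make the construction concrete, the following is a minimal, self-contained sketch (not the authors' code) of a Gibbs-weighted ensemble of random linear classifiers on Gaussian data labeled by a teacher perceptron. The dimensions, the number of classifiers, the choice of the 0-1 error as the energy, and the temperature grid are illustrative assumptions; the sweep over temperatures simply shows how one would look for the loss-minimizing temperature described in the abstract.

```python
import numpy as np

# Illustrative sketch (assumptions, not the paper's code): random linear
# classifiers weighted by the Gibbs measure exp(-energy / T), where the
# energy of each classifier is its classification loss on the data.

rng = np.random.default_rng(0)
d, n, K = 20, 500, 200  # input dimension, number of samples, number of random classifiers

# Teacher perceptron: unknown in the actual learning problem, used here
# only to generate the labels y = sign(X @ w_teacher).
w_teacher = rng.standard_normal(d)
X = rng.standard_normal((n, d))          # Gaussian samples
y = np.sign(X @ w_teacher)

# Ensemble of random classifiers: random hyperplanes through the origin.
W = rng.standard_normal((K, d))
preds = np.sign(X @ W.T)                 # shape (n, K): each classifier's predictions

# Energy of each classifier = its 0-1 classification error (an assumption).
energy = np.mean(preds != y[:, None], axis=0)   # shape (K,)

def gibbs_ensemble_accuracy(T):
    """Accuracy of the weighted-vote ensemble at temperature T, with
    member weights given by the Gibbs measure exp(-energy / T)."""
    w = np.exp(-energy / T)
    w /= w.sum()
    agg = np.sign(preds @ w)             # weighted vote over the K classifiers
    return np.mean(agg == y)

# Sweep the temperature and report the value that maximizes accuracy
# (equivalently, minimizes the ensemble's classification loss).
temps = np.logspace(-3, 1, 50)
accs = [gibbs_ensemble_accuracy(T) for T in temps]
best = temps[int(np.argmax(accs))]
print(f"best temperature ~ {best:.4f}, ensemble accuracy = {max(accs):.3f}")
```

In this toy setup the interesting question, mirroring the paper's claim, is whether the accuracy-maximizing temperature shifts when the teacher vector or the number of classifiers K is changed; the sketch only provides the scaffolding for such a check.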
Similar Papers
High-entropy Advantage in Neural Networks' Generalizability
Machine Learning (CS)
Makes computers learn better by using "energy" ideas.
Neural Thermodynamics I: Entropic Forces in Deep and Universal Representation Learning
Machine Learning (CS)
Explains how AI learns by using "entropic forces."
Emergence of Computational Structure in a Neural Network Physics Simulator
Machine Learning (CS)
Finds hidden "thinking parts" inside computer brains.