A mathematical theory for understanding when abstract representations emerge in neural networks
By: Bin Wang, W. Jeffrey Johnston, Stefano Fusi
Potential Business Impact:
The brain learns abstract concepts by practicing tasks.
Recent experiments reveal that task-relevant variables are often encoded in approximately orthogonal subspaces of the neural activity space. These disentangled low-dimensional representations are observed in multiple brain areas and across different species, and are typically the result of a process of abstraction that supports simple forms of out-of-distribution generalization. The mechanisms by which such geometries emerge remain poorly understood, and the mechanisms that have been investigated are typically unsupervised (e.g., based on variational auto-encoders). Here, we show mathematically that abstract representations of latent variables are guaranteed to appear in the last hidden layer of feedforward nonlinear networks when they are trained on tasks that depend directly on these latent variables. These abstract representations reflect the structure of the desired outputs or the semantics of the input stimuli. To investigate the neural representations that emerge in these networks, we develop an analytical framework that maps the optimization over the network weights into a mean-field problem over the distribution of neural preactivations. Applying this framework to a finite-width ReLU network, we find that its hidden layer exhibits an abstract representation at all global minima of the task objective. We further extend these analyses to two broad families of activation functions and deep feedforward architectures, demonstrating that abstract representations naturally arise in all these scenarios. Together, these results provide an explanation for the widely observed abstract representations in both the brain and artificial neural networks, as well as a mathematically tractable toolkit for understanding the emergence of different kinds of representations in task-optimized, feature-learning network models.
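The abstract's central claim, that training a feedforward ReLU network on tasks that depend directly on latent variables yields abstract (cross-condition generalizable) hidden representations, can be illustrated empirically. The following is a minimal sketch, not the paper's analytical framework: it trains a one-hidden-layer ReLU network (plain numpy, manual gradients) to report two binary latent variables from mixed inputs, then probes abstraction with a cross-condition generalization test, training a linear decoder for one latent on only half of the conditions and testing it on the held-out half. All sizes, the input mixing, and the noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two binary latent variables define four "conditions".
n_per, d_in, d_hid = 200, 20, 64
latents = np.array([[s1, s2] for s1 in (-1, 1) for s2 in (-1, 1)], float)
M = rng.normal(size=(d_in, 2))  # fixed random mixing of latents into inputs

Z = np.repeat(latents, n_per, axis=0)                  # (800, 2) latent labels
X = np.tanh(Z @ M.T) + 0.1 * rng.normal(size=(len(Z), d_in))

# One-hidden-layer ReLU network trained (MSE) to output both latents.
W1 = rng.normal(size=(d_in, d_hid)) / np.sqrt(d_in)
b1 = np.zeros(d_hid)
W2 = rng.normal(size=(d_hid, 2)) / np.sqrt(d_hid)
b2 = np.zeros(2)
lr = 0.1
for _ in range(3000):
    H = np.maximum(X @ W1 + b1, 0.0)                   # hidden ReLU layer
    Y = H @ W2 + b2
    G = 2.0 * (Y - Z) / len(X)                         # dLoss/dY for mean MSE
    gW2, gb2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (H > 0)                          # backprop through ReLU
    gW1, gb1 = X.T @ GH, GH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Cross-condition generalization probe: decode latent 1 from the hidden
# layer, training only where latent 2 = +1, testing where latent 2 = -1.
H = np.maximum(X @ W1 + b1, 0.0)
train = Z[:, 1] == 1
w, *_ = np.linalg.lstsq(H[train], Z[train, 0], rcond=None)
acc = np.mean(np.sign(H[~train] @ w) == Z[~train, 0])
print(f"cross-condition decoding accuracy: {acc:.2f}")
```

High accuracy on the held-out conditions indicates that the hidden layer encodes the latent in a direction that generalizes across the other variable, the signature of the disentangled, approximately orthogonal geometry the paper analyzes.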
Similar Papers
From superposition to sparse codes: interpretable representations in neural networks
Machine Learning (CS)
Helps computers interpret what they see, as humans do.
Why all roads don't lead to Rome: Representation geometry varies across the human visual cortical hierarchy
Neurons and Cognition
Brain and AI learn best when goals change.
Semantic representations emerge in biologically inspired ensembles of cross-supervising neural networks
Neurons and Cognition
Brain networks learn by teaching each other.