RNNs perform task computations by dynamically warping neural representations
By: Arthur Pellegrino, Angus Chadwick
Potential Business Impact:
Could make neural-network systems more interpretable by revealing how recurrent networks compute on time-varying data.
Analysing how neural networks represent data features in their activations can help interpret how they perform tasks. Hence, a long line of work has focused on mathematically characterising the geometry of such "neural representations." In parallel, machine learning has seen a surge of interest in understanding how dynamical systems perform computations on time-varying input data. Yet, the link between computation-through-dynamics and representational geometry remains poorly understood. Here, we hypothesise that recurrent neural networks (RNNs) perform computations by dynamically warping their representations of task variables. To test this hypothesis, we develop a Riemannian geometric framework that enables the derivation of the manifold topology and geometry of a dynamical system from the manifold of its inputs. By characterising the time-varying geometry of RNNs, we show that dynamic warping is a fundamental feature of their computations.
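As a rough illustration of the kind of quantity such a framework deals with (not the authors' method), the sketch below measures how a vanilla RNN "warps" a 1-D input manifold over time, via the pullback metric of the hidden-state map. The network sizes, random weights, and circular input encoding are all illustrative assumptions.

```python
# Minimal sketch, assuming a vanilla tanh RNN driven by a constant input that
# encodes a point on the unit circle (a 1-D input manifold parametrised by theta).
# The induced (pullback) metric g(theta, t) = || d h_t / d theta ||^2 quantifies
# how the representation of the manifold is stretched or compressed over time.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
n_hidden, n_in, n_steps = 64, 2, 20
k1, k2 = jax.random.split(key)
W_rec = 0.9 * jax.random.normal(k1, (n_hidden, n_hidden)) / jnp.sqrt(n_hidden)
W_in = jax.random.normal(k2, (n_hidden, n_in)) / jnp.sqrt(n_in)

def hidden_trajectory(theta):
    """Run the RNN on a constant input encoding a point on the circle,
    returning the hidden state at every time step."""
    x = jnp.array([jnp.cos(theta), jnp.sin(theta)])    # point on the input manifold
    def step(h, _):
        h = jnp.tanh(W_rec @ h + W_in @ x)
        return h, h
    _, hs = jax.lax.scan(step, jnp.zeros(n_hidden), None, length=n_steps)
    return hs                                          # shape (n_steps, n_hidden)

def pullback_metric(theta):
    """Scalar induced metric at each time step for a 1-D input manifold."""
    J = jax.jacobian(hidden_trajectory)(theta)         # (n_steps, n_hidden)
    return jnp.sum(J ** 2, axis=-1)                    # (n_steps,)

thetas = jnp.linspace(0.0, 2 * jnp.pi, 50)
g = jax.vmap(pullback_metric)(thetas)                  # (50, n_steps)
print(g.shape)   # variation of g over theta and time = "dynamic warping"
```

A time-varying, theta-dependent metric like this is one concrete way to see a representation being dynamically warped; the paper's framework derives such geometric quantities more generally from the input manifold.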
Similar Papers
Emergent Riemannian geometry over learning discrete computations on continuous manifolds
Machine Learning (CS)
Studies how Riemannian geometric structure emerges as networks learn discrete computations on continuous data manifolds.
Neural Feature Geometry Evolves as Discrete Ricci Flow
Machine Learning (CS)
Relates the evolution of neural feature geometry during learning to discrete Ricci flow.
Mechanistic Interpretability of RNNs emulating Hidden Markov Models
Machine Learning (CS)
Analyses the internal mechanisms of RNNs trained to emulate Hidden Markov Models.