On the geometry and topology of representations: the manifolds of modular addition
By: Gabriela Moisescu-Pareja, Gavin McCracken, Harley Wiltzer, and more
The Clock and Pizza interpretations, associated with architectures that differ only in whether attention is uniform or learnable, were introduced to argue that different architectural designs can yield distinct circuits for modular addition. In this work, we show that this is not the case: both uniform-attention and learnable-attention architectures implement the same algorithm via topologically and geometrically equivalent representations. Our methodology goes beyond interpreting individual neurons and weights. Instead, we identify all of the neurons corresponding to each learned representation and study them collectively as a single entity. This approach reveals that each learned representation is a manifold that we can analyze with tools from topology. Building on this insight, we statistically analyze the learned representations across hundreds of circuits and demonstrate the similarity of modular addition circuits that arise naturally from common deep learning paradigms.
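A minimal sketch of the kind of geometric check the abstract describes, not the authors' actual pipeline: the modulus, frequency, and synthetic "clock"-style embedding below are illustrative assumptions, and the paper's topological analysis is richer than the simple constant-radius test shown here. The idea is that tokens of a modular-addition model, projected onto their top two principal components, trace out a circle.

```python
# Minimal sketch (not the authors' code): test whether a token embedding for
# modular addition traces out a circle. The embedding here is a synthetic
# stand-in; in practice one would load the trained model's embedding matrix.
import numpy as np

p = 59                      # modulus (assumed for the toy example)
k = 7                       # single Fourier frequency (assumed)
rng = np.random.default_rng(0)

# Toy embedding: a noisy circle of frequency k lifted into 64 dimensions.
angles = 2 * np.pi * k * np.arange(p) / p
circle = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (p, 2)
lift = rng.normal(size=(2, 64))                               # random linear lift
E = circle @ lift + 0.01 * rng.normal(size=(p, 64))           # (p, 64) embedding

# Project onto the top-2 principal components.
E_centered = E - E.mean(axis=0)
_, _, Vt = np.linalg.svd(E_centered, full_matrices=False)
proj = E_centered @ Vt[:2].T                                  # (p, 2)

# A circle has (nearly) constant radius about its centroid.
radii = np.linalg.norm(proj - proj.mean(axis=0), axis=1)
print(f"radius spread: {radii.std() / radii.mean():.3f}")     # ~0 for a clean circle
```

Running this prints a small radius spread, the signature of a circular (clock-like) manifold; a real analysis would apply such checks to representations extracted from trained models across many circuits.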