Implicit Bias and Invariance: How Hopfield Networks Efficiently Learn Graph Orbits
By: Michael Murray, Tenzin Chan, Kedar Karhadker, and more
Many learning problems involve symmetries, and while invariance can be built into neural architectures, it can also emerge implicitly when training on group-structured data. We study this phenomenon in classical Hopfield networks and show they can infer the full isomorphism class of a graph from a small random sample. Our results reveal that (i) graph isomorphism classes can be represented within a three-dimensional invariant subspace, (ii) using gradient descent to minimize energy flow (MEF) has an implicit bias toward norm-efficient solutions, which underpins a polynomial sample-complexity bound for learning isomorphism classes, and (iii) across multiple learning rules, parameters converge toward the invariant subspace as sample sizes grow. Together, these findings highlight a unifying mechanism for generalization in Hopfield networks: a bias toward norm efficiency in learning drives the emergence of approximate invariance under group-structured data.
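To make the setup concrete, the sketch below illustrates the general recipe the abstract describes: encode relabelled copies of a graph's adjacency matrix as plus/minus-one Hopfield patterns and run gradient descent on an energy-flow-style objective. This is only an illustration under stated assumptions: the paper's exact MEF objective is not reproduced; the single-bit-flip surrogate loss, the function names (graph_patterns, energy_flow_loss, energy_flow_grad), the toy 5-vertex graph, the sample count, and the learning rate are all choices made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def graph_patterns(adj, n_samples, rng):
    """Flatten randomly relabelled copies of an adjacency matrix into +/-1 patterns."""
    n = adj.shape[0]
    patterns = []
    for _ in range(n_samples):
        p = rng.permutation(n)
        a = adj[np.ix_(p, p)]                 # an isomorphic copy of the graph
        patterns.append(2.0 * a.flatten() - 1.0)
    return np.array(patterns)

def hopfield_energy(W, s):
    """Classical Hopfield energy E(s) = -1/2 s^T W s (zero thresholds)."""
    return -0.5 * s @ W @ s

def energy_flow_loss(W, patterns):
    """Assumed surrogate for an energy-flow objective: for each stored pattern,
    penalise exp of half the energy gap to every single-bit-flip neighbour, so
    gradient descent deepens the pattern relative to its Hamming neighbours."""
    loss = 0.0
    for s in patterns:
        delta = 2.0 * s * (W @ s)             # energy change from flipping bit i
        loss += np.exp(-delta / 2.0).sum()
    return loss / len(patterns)

def energy_flow_grad(W, patterns):
    """Gradient of the surrogate loss, projected back to symmetric, zero-diagonal W."""
    G = np.zeros_like(W)
    for s in patterns:
        delta = 2.0 * s * (W @ s)
        w = np.exp(-delta / 2.0)              # per-neuron penalty weights
        G -= np.outer(w * s, s)               # d/dW of sum_i exp(-s_i (W s)_i)
    G = 0.5 * (G + G.T)
    np.fill_diagonal(G, 0.0)
    return G / len(patterns)

# Toy run: a random 5-vertex graph, 8 relabelled samples, plain gradient descent.
n = 5
upper = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
adj = upper + upper.T
X = graph_patterns(adj, n_samples=8, rng=rng)

W = np.zeros((n * n, n * n))
for _ in range(200):
    W -= 0.05 * energy_flow_grad(W, X)

print("surrogate loss:", energy_flow_loss(W, X))
print("energy of one stored pattern:", hopfield_energy(W, X[0]))
```

Starting from W = 0 and taking small plain gradient steps keeps the learned weights as norm-efficient as the sampled relabellings allow, which is the kind of implicit bias the abstract attributes to MEF training; checking how close the learned W sits to the paper's three-dimensional invariant subspace would require the subspace construction from the paper itself.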
Similar Papers
Formalized Hopfield Networks and Boltzmann Machines (Machine Learning, CS)
Implicit Hypergraph Neural Network (Machine Learning, CS)
Hopfield Networks Meet Big Data: A Brain-Inspired Deep Learning Framework for Semantic Data Linking (Machine Learning, CS)