Saddle Hierarchy in Dense Associative Memory
By: Robin Thériault, Daniele Tantari
Potential Business Impact:
Could help AI learn faster and use less computing power by growing big networks out of small, already-trained ones.
Dense associative memory (DAM) models have been attracting renewed attention since they were shown to be robust to adversarial examples and closely related to state-of-the-art machine learning paradigms, such as the attention mechanisms in transformers and generative diffusion models. We study a DAM built upon a three-layer Boltzmann machine with Potts hidden units, which represent data clusters and classes. Through a statistical mechanics analysis, we derive saddle-point equations that characterize both the stationary points of DAMs trained on real data and the fixed points of DAMs trained on synthetic data within a teacher-student framework. Based on these results, we propose a novel regularization scheme that makes training significantly more stable. Moreover, we show empirically that our DAM learns interpretable solutions to both supervised and unsupervised classification problems. Pushing our theoretical analysis further, we find that the weights learned by relatively small DAMs correspond to unstable saddle points in larger DAMs. We implement a network-growing algorithm that leverages this saddle-point hierarchy to drastically reduce the computational cost of training dense associative memory.
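
The network-growing algorithm mentioned at the end of the abstract can be illustrated in a few lines. Below is a minimal Python sketch, not the authors' implementation: dam_energy uses a generic log-sum-exp dense-associative-memory energy as a stand-in for the paper's three-layer Potts architecture, and grow_weights is a hypothetical helper that embeds a small trained weight matrix into a larger one, exploiting the reported result that small-DAM solutions are unstable saddle points of larger DAMs.

import numpy as np

rng = np.random.default_rng(0)

def dam_energy(W, x, beta=1.0):
    # Generic DAM energy: a smooth maximum (log-sum-exp) over the
    # overlaps between input x and the stored memory rows of W.
    # Stand-in for the paper's richer three-layer Potts model.
    overlaps = W @ x
    return -np.log(np.exp(beta * overlaps).sum()) / beta

def grow_weights(W_small, n_hidden_large, noise=1e-3):
    # Embed a small trained weight matrix into a larger model. Per the
    # abstract, the small solution is an unstable saddle point of the
    # larger DAM, so a weak random perturbation lets further training
    # escape the saddle along the new directions instead of stalling.
    n_small, n_visible = W_small.shape
    W_large = noise * rng.standard_normal((n_hidden_large, n_visible))
    W_large[:n_small] += W_small
    return W_large

# Usage sketch: train a small DAM, grow it, then resume training
# (the training loop itself is omitted here).
W_small = rng.standard_normal((4, 16))    # pretend this was trained
W_large = grow_weights(W_small, n_hidden_large=16)
assert W_large.shape == (16, 16)
print(dam_energy(W_large, rng.standard_normal(16)))  # energy of a random probe

The intuition behind the warm start is that gradient descent near the inherited saddle refines the old memories quickly while the weakly perturbed new rows learn fresh structure, which is the mechanism behind the reduction in training cost reported in the abstract.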
Similar Papers
Dense associative memory on the Bures-Wasserstein space
Machine Learning (CS)
Stores and finds information as complex ideas, not just simple lists.
Distributed Dynamic Associative Memory via Online Convex Optimization
Machine Learning (CS)
Helps many computers learn together faster.
Dense Associative Memories with Analog Circuits
Neural and Evolutionary Computing
Makes AI think super fast, no matter how big.