Modern Methods in Associative Memory
By: Dmitry Krotov, Benjamin Hoover, Parikshit Ram, and others
Potential Business Impact:
Helps computers store and recall information the way brains do.
Associative Memories, like the famous Hopfield Networks, are elegant models of fully recurrent neural networks whose fundamental job is to store and retrieve information. In the past few years, they have experienced a surge of interest due to novel theoretical results on their information storage capacity and their relationship to state-of-the-art AI architectures, such as Transformers and Diffusion Models. These connections open up possibilities for interpreting the computation of mainstream AI networks through the theoretical lens of Associative Memories. Additionally, novel Lagrangian formulations of these networks make it possible to design powerful distributed models that learn useful representations and inform the design of new architectures. This tutorial provides an approachable introduction to Associative Memories, emphasizing the modern language and methods used in this area of research, with hands-on mathematical derivations and coding notebooks.
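To make the store-and-retrieve computation concrete, here is a minimal sketch of a classical binary Hopfield network, not the tutorial's own notebooks: patterns are stored with the Hebbian outer-product rule and retrieved by iterating sign updates from a corrupted probe until a fixed point is reached. All function names and parameters below are illustrative.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product rule: W = (1/N) * sum_mu x_mu x_mu^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections in the classical model
    return W

def retrieve(W, probe, steps=20):
    """Iterate sign updates until a fixed point (synchronous here for brevity;
    asynchronous updates are what guarantee energy descent in the classical model)."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1.0  # break ties consistently
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Store two random +/-1 patterns of dimension 64, then recover one from a noisy probe.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(2, 64))
W = store(patterns)
probe = patterns[0].copy()
flip = rng.choice(64, size=8, replace=False)  # corrupt 8 of the 64 bits
probe[flip] *= -1
recovered = retrieve(W, probe)
print("bits matching stored pattern:", int((recovered == patterns[0]).sum()), "/ 64")
```

At this low memory load the corrupted probe falls back into the stored pattern's basin of attraction; the modern results the abstract alludes to concern energy functions whose storage capacity scales far better than this classical quadratic rule.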
Similar Papers
Neural Learning Rules from Associative Networks Theory
Neurons and Cognition
Teaches computers to learn from memories.
Memorization to Generalization: Emergence of Diffusion Models from Associative Memory
Machine Learning (CS)
Teaches AI to remember and create new things.
Associative Memory Model with Neural Networks: Memorizing multiple images with one neuron
Neural and Evolutionary Computing
A computer remembers many pictures using a single neuron.