Dense Associative Memories with Analog Circuits
By: Marc Gong Bacvanski, Xincheng You, John Hopfield, and more
The increasing computational demands of modern AI systems have exposed fundamental limitations of digital hardware, driving interest in alternative paradigms for efficient large-scale inference. Dense Associative Memory (DenseAM) is a family of models that offers a flexible framework for representing many contemporary neural architectures, such as transformers and diffusion models, by casting them as dynamical systems evolving on an energy landscape. In this work, we propose a general method for building analog accelerators for DenseAMs and implementing them using electronic RC circuits, crossbar arrays, and amplifiers. We find that our analog DenseAM hardware performs inference in constant time, independent of model size. This result highlights an asymptotic advantage of analog DenseAMs over digital numerical solvers, whose cost scales at least linearly with model size. We consider three settings of progressively increasing complexity: XOR, the Hamming (7,4) code, and a simple language model defined on binary variables. We propose analog implementations of these three models and analyze the scaling of inference time, energy consumption, and hardware resources. Finally, we estimate lower bounds on the achievable time constants imposed by amplifier specifications, suggesting that even conservative existing analog technology can enable inference times on the order of tens to hundreds of nanoseconds. By harnessing the intrinsic parallelism and continuous-time operation of analog circuits, our DenseAM-based accelerator design offers a new avenue for fast and scalable AI hardware.
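The energy-descent picture the abstract describes can be made concrete with a small simulation. The sketch below is our own illustration, not the authors' circuit or code: it Euler-integrates the continuous-time relaxation tau * dx/dt = -x + M^T softmax(beta * M x) of a log-sum-exp DenseAM, the same first-order dynamics an RC node driven by crossbar matrix products and a softmax-like amplifier stage would realize. The Hamming (7,4) demo and the parameters beta, tau, and dt are illustrative choices, not values from the paper.

```python
# Illustrative sketch only: a log-sum-exp DenseAM relaxing a noisy
# Hamming (7,4) codeword. Parameter choices (beta, tau, dt) are
# hypothetical, not taken from the paper.
import numpy as np

def denseam_retrieve(memories, probe, beta=8.0, tau=1.0, dt=0.01, steps=500):
    """Euler-integrate the continuous-time relaxation
        tau * dx/dt = -x + memories.T @ softmax(beta * memories @ x).
    An analog realization would let an RC node (time constant tau) do
    this integration, with the two matrix products mapped to crossbar
    arrays and the softmax to an amplifier stage."""
    x = probe.astype(float)
    for _ in range(steps):
        scores = beta * (memories @ x)      # crossbar: similarity to each memory
        scores -= scores.max()              # for numerical stability
        p = np.exp(scores)
        p /= p.sum()                        # softmax separation function
        x += (dt / tau) * (-x + memories.T @ p)  # RC-style relaxation step
    return x

def hamming74_codewords():
    """All 16 codewords of the Hamming (7,4) code, laid out as
    [p1, p2, d1, p3, d2, d3, d4] with the standard parity equations."""
    words = []
    for n in range(16):
        d = [(n >> i) & 1 for i in range(4)]
        p1, p2, p3 = d[0] ^ d[1] ^ d[3], d[0] ^ d[2] ^ d[3], d[1] ^ d[2] ^ d[3]
        words.append([p1, p2, d[0], p3, d[1], d[2], d[3]])
    return np.array(words)

memories = 2.0 * hamming74_codewords() - 1.0  # store codewords as +/-1 patterns
clean = memories[5].copy()
noisy = clean.copy()
noisy[2] *= -1                                # inject a single bit flip
recovered = denseam_retrieve(memories, noisy)
assert np.array_equal(np.sign(recovered), clean)  # the flip is corrected
```

In hardware, the Euler loop disappears: the circuit's own RC dynamics carry the state to a fixed point in a few time constants, which is the source of the constant-time, model-size-independent inference claimed above.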