First Demonstration of Second-order Training of Deep Neural Networks with In-memory Analog Matrix Computing
By: Saitao Zhang, Yubiao Luo, Shiqing Wang, and more
Potential Business Impact:
Makes AI learn much faster and use less power.
Second-order optimization methods, which leverage curvature information, offer faster and more stable convergence than first-order methods such as stochastic gradient descent (SGD) and Adam. However, their practical adoption is hindered by the prohibitively high cost of inverting the second-order information matrix, particularly in large-scale neural network training. Here, we present the first demonstration of a second-order optimizer powered by in-memory analog matrix computing (AMC) using resistive random-access memory (RRAM), which performs matrix inversion (INV) in a single step. We validate the optimizer by training a two-layer convolutional neural network (CNN) for handwritten letter classification, achieving 26% and 61% fewer training epochs than SGD with momentum and Adam, respectively. On a larger task using the same second-order method, our system delivers a 5.88x improvement in throughput and a 6.9x gain in energy efficiency compared to state-of-the-art digital processors. These results demonstrate the feasibility and effectiveness of AMC circuits for second-order neural network training, opening a new path toward energy-efficient AI acceleration.
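To make the contrast with first-order methods concrete, below is a minimal NumPy sketch of a damped Gauss-Newton style second-order update for a linear least-squares model. This is not the authors' RRAM implementation; the function and parameter names (second_order_step, damping) are illustrative assumptions. The np.linalg.inv call marks the curvature-matrix inversion that, in the paper, the analog INV circuit performs in a single step instead of an O(n^3) digital routine.

import numpy as np

def second_order_step(w, X, y, damping=1e-3):
    # One damped Gauss-Newton update for linear least squares:
    #   w <- w - (X^T X + lambda I)^{-1} X^T r
    r = X @ w - y                                # residuals
    g = X.T @ r                                  # gradient
    H = X.T @ X + damping * np.eye(w.size)       # damped curvature matrix
    H_inv = np.linalg.inv(H)                     # the costly inversion step
                                                 # offloaded to the analog INV circuit
    return w - H_inv @ g

# Toy usage: recover w_true in a few second-order steps.
rng = np.random.default_rng(0)
X = rng.standard_normal((64, 8))
w_true = rng.standard_normal(8)
y = X @ w_true
w = np.zeros(8)
for _ in range(3):
    w = second_order_step(w, X, y)
print("residual norm:", np.linalg.norm(X @ w - y))

In a digital emulation like this, the inversion dominates the cost as the matrix grows; that cubic-scaling bottleneck is exactly what the in-memory AMC approach targets.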
Similar Papers
Modeling Closed-loop Analog Matrix Computing Circuits with Interconnect Resistance
Emerging Technologies
Makes computer chips solve math problems faster.
RRAM-Based Analog Matrix Computing for Massive MIMO Signal Processing: A Review
Signal Processing
Makes wireless signals faster and more reliable.
In-memory Training on Analog Devices with Limited Conductance States via Multi-tile Residual Learning
Machine Learning (CS)
Trains AI better with cheaper, simpler computer parts.