Resistive Memory based Efficient Machine Unlearning and Continual Learning
By: Ning Lin, Jichang Yang, Yangu He, and more
Resistive memory (RM) based neuromorphic systems can emulate synaptic plasticity and thus support continual learning, but they generally lack biologically inspired mechanisms for active forgetting, which are critical for meeting modern data privacy requirements. Algorithmic forgetting, or machine unlearning, seeks to remove the influence of specific data from trained models to prevent memorization of sensitive information and the generation of harmful content, yet existing exact and approximate unlearning schemes incur prohibitive programming overheads on RM hardware owing to device variability and iterative write-verify cycles. Analogue implementations of continual learning face similar barriers. Here we present a hardware-software co-design that enables an efficient training, deployment and inference pipeline for machine unlearning and continual learning on RM accelerators. At the software level, we introduce a low-rank adaptation (LoRA) framework that confines updates to compact parameter branches, substantially reducing the number of trainable parameters and therefore the training cost. At the hardware level, we develop a hybrid analogue-digital compute-in-memory system in which well-trained weights are stored in analogue RM arrays, whereas dynamic LoRA updates are implemented in a digital computing unit with an SRAM buffer. This hybrid architecture avoids costly reprogramming of analogue weights and maintains high energy efficiency during inference. Fabricated in a 180 nm CMOS process, the prototype achieves up to a 147.76-fold reduction in training cost, a 387.95-fold reduction in deployment overhead and a 48.44-fold reduction in inference energy across privacy-sensitive tasks including face recognition, speaker authentication and stylized image generation, paving the way for secure and efficient neuromorphic intelligence at the edge.
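To make the LoRA branch concrete, here is a minimal PyTorch-style sketch of the update scheme the abstract describes: the pretrained weight is frozen (standing in for the analogue RM array, which is never reprogrammed), while a low-rank pair A, B carries all trainable parameters (standing in for the digital, SRAM-backed branch). This follows the standard LoRA formulation y = Wx + (alpha/r)·BAx; the class, names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a trainable low-rank branch.

    y = W x + (alpha / r) * B A x, with rank r << min(d_in, d_out).
    The frozen W models the analogue RM array; A and B stand in for the
    digital LoRA branch. Illustrative sketch, not the paper's code.
    """

    def __init__(self, d_in: int, d_out: int, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)  # frozen: never rewritten
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero-init: branch starts as a no-op
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=512, d_out=512, r=4)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.4f}")  # ~0.0154 for r=4, d=512
```

Under this scheme, unlearning a data subset or adapting to a new task only retrains or swaps the small A and B factors in digital memory; the analogue array holding W stays untouched, which is what eliminates the variability-sensitive write-verify reprogramming cost the abstract highlights.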
Similar Papers
Reconfigurable Digital RRAM Logic Enables In-Situ Pruning and Learning for Edge AI
Hardware Architecture
Makes AI learn faster and use less power.
When Forgetting Builds Reliability: LLM Unlearning for Reliable Hardware Code Generation
Machine Learning (CS)
Cleans AI code for safer computer chip making.
Forget Forgetting: Continual Learning in a World of Abundant Memory
Machine Learning (CS)
Teaches computers new things without forgetting old ones.