Special Session: Sustainable Deployment of Deep Neural Networks on Non-Volatile Compute-in-Memory Accelerators

Published: August 17, 2025 | arXiv ID: 2508.12195v1

By: Yifan Qin, Zheyu Yan, Wujie Wen, and more

Potential Business Impact:

Improves the inference accuracy and practical lifetime of NVM-based AI accelerator chips by reducing reliance on costly write-verify operations.

Non-volatile memory (NVM) based compute-in-memory (CIM) accelerators have emerged as a sustainable solution that significantly boosts energy efficiency and minimizes latency for Deep Neural Network (DNN) inference due to their in-situ data processing capabilities. However, the performance of these NVM-based CIM (NVCIM) accelerators degrades because of the stochastic nature and intrinsic variations of NVM devices. Conventional write-verify operations, which enhance inference accuracy through iterative writing and verification during deployment, are costly in terms of energy and time. Inspired by negative feedback theory, we present a novel negative optimization training mechanism to achieve robust DNN deployment for NVCIM. We develop an Oriented Variational Forward (OVF) training method to implement this mechanism. Experiments show that OVF outperforms existing state-of-the-art techniques with up to a 46.71% improvement in inference accuracy while reducing epistemic uncertainty. By reducing the reliance on write-verify operations, this mechanism addresses performance degradation and contributes to the sustainable and practical deployment of NVCIM accelerators.
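The core idea of training for robustness against device variation can be illustrated with a minimal sketch: inject random perturbations into the weights during the forward pass so that the learned model tolerates the stochastic deviations an NVM array would introduce at inference time. This is a generic variation-aware training toy in NumPy, not the paper's OVF method; the Gaussian noise model, `sigma`, and the logistic-regression task are all illustrative assumptions.

```python
import numpy as np

def train_variation_aware(X, y, sigma=0.1, lr=0.1, epochs=50, seed=0):
    """Train a logistic-regression model whose forward pass sees noisy
    weights, emulating stochastic NVM device variation (toy sketch)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Each forward pass uses a perturbed copy of the weights,
            # standing in for device-to-device conductance variation.
            w_noisy = w + rng.normal(0.0, sigma, size=w.shape)
            pred = 1.0 / (1.0 + np.exp(-w_noisy @ xi))
            # Gradient of the cross-entropy loss, applied to the clean
            # weights (the values the training algorithm controls).
            w -= lr * (pred - yi) * xi
    return w

def accuracy_under_variation(w, X, y, sigma, trials=20, seed=1):
    """Average accuracy when the deployed weights are perturbed,
    mimicking evaluation on a variation-prone NVCIM array."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(trials):
        w_dev = w + rng.normal(0.0, sigma, size=w.shape)
        preds = (X @ w_dev > 0).astype(float)
        accs.append((preds == y).mean())
    return float(np.mean(accs))
```

A model trained this way should keep its accuracy when its weights are randomly perturbed at deployment, which is the property that reduces how often expensive write-verify correction is needed.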

Page Count
4 pages

Category
Computer Science:
Hardware Architecture