A Time- and Energy-Efficient CNN with Dense Connections on Memristor-Based Chips

Published: August 17, 2025 | arXiv ID: 2508.12251v1

By: Wenyong Zhou, Yuan Ren, Jiajun Zhou, and more

Potential Business Impact:

Makes AI chips faster and more power-efficient.

Designing lightweight convolutional neural network (CNN) models is an active research area in edge AI. Compute-in-memory (CIM) provides a new computing paradigm that alleviates the time and energy consumption caused by data transfer in the von Neumann architecture. Among competing alternatives, resistive random-access memory (RRAM) is a promising CIM device owing to its reliability and multi-bit programmability. However, classical lightweight designs such as depthwise convolution incur under-utilization of RRAM crossbars, which are restricted by their inherently dense weight-to-RRAM-cell mapping. To build an RRAM-friendly yet efficient CNN, we evaluate the hardware cost of DenseNet, which maintains high accuracy at a small parameter count compared with other CNNs. Observing that the linearly increasing channel count in DenseNet leads to low crossbar utilization and causes large latency and energy consumption, we propose a scheme that concatenates the feature maps of front layers to form the input of the last layer in each stage. Experiments show that our proposed model consumes less time and energy than conventional ResNet and DenseNet, while producing competitive accuracy on the CIFAR and ImageNet datasets.
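The contrast between DenseNet's linearly growing inputs and the proposed stage-end concatenation can be sketched numerically. The snippet below is a hypothetical illustration, not the authors' code: it assumes each layer in a stage produces `growth` output channels, that intermediate layers in the proposed scheme take only the previous layer's output, and that the last layer concatenates the outputs of all front layers. The function names and the exact concatenation rule are assumptions for illustration.

```python
def densenet_in_channels(c0, growth, num_layers):
    """DenseNet-style stage: layer l concatenates the stage input (c0 channels)
    with the growth-channel outputs of all l preceding layers, so input
    channels grow linearly: c0 + l * growth."""
    return [c0 + l * growth for l in range(num_layers)]


def proposed_in_channels(c0, growth, num_layers):
    """Assumed reading of the proposed scheme: front layers form a plain chain
    (each sees only the previous layer's growth-channel output), and only the
    last layer in the stage concatenates all front layers' outputs."""
    front = [c0] + [growth] * (num_layers - 2)   # plain chain of front layers
    last = (num_layers - 1) * growth             # concat of all front outputs
    return front + [last]


# Example stage: 64 input channels, growth rate 32, 4 layers.
print(densenet_in_channels(64, 32, 4))  # [64, 96, 128, 160]
print(proposed_in_channels(64, 32, 4))  # [64, 32, 32, 96]
```

Under these assumptions, only one crossbar mapping per stage sees a wide concatenated input, while the intermediate layers keep narrow, uniform input widths, which is the kind of shape that maps onto RRAM crossbars with less wasted area.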

Page Count
4 pages

Category
Computer Science:
Hardware Architecture