CIMinus: Empowering Sparse DNN Workloads Modeling and Exploration on SRAM-based CIM Architectures
By: Yingjie Qi, Jianlei Yang, Rubing Yang, and more
Potential Business Impact:
Helps chip designers predict the energy use and speed of AI hardware that computes inside memory, so neural networks can run using less energy.
Compute-in-memory (CIM) has emerged as a pivotal direction for accelerating machine learning workloads such as Deep Neural Networks (DNNs). However, effectively exploiting sparsity in CIM systems presents numerous challenges due to the inherent rigidity of their array structures. Designing sparse DNN dataflows and developing efficient mapping strategies also become more complex when accounting for diverse sparsity patterns and the flexibility of a multi-macro CIM structure. Despite these complexities, a unified, systematic view and modeling approach for diverse sparse DNN workloads on CIM systems is still absent. In this paper, we propose CIMinus, a framework dedicated to cost modeling for sparse DNN workloads on CIM architectures. It provides an in-depth energy consumption analysis at the level of individual components and an assessment of overall workload latency. We validate CIMinus against contemporary CIM architectures and demonstrate its applicability in two use cases. These cases provide valuable insights into both the impact of sparsity patterns and the effectiveness of mapping strategies, bridging the gap between theoretical design and practical implementation.
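To make the idea of component-level cost modeling concrete, here is a minimal, hypothetical Python sketch of how such an estimator might be structured. The macro dimensions, per-component energy constants, gating model, and function names below are illustrative assumptions for exposition, not CIMinus's actual interface or calibrated numbers.

```python
# Hypothetical illustration of component-level CIM cost modeling in the
# spirit of CIMinus. All component names, energy constants, and formulas
# here are assumptions for exposition, not the framework's actual model.

from dataclasses import dataclass


@dataclass
class MacroConfig:
    rows: int = 256            # SRAM array rows (assumed)
    cols: int = 256            # SRAM array columns (assumed)
    e_mac_pj: float = 0.05     # energy per active MAC cell, pJ (assumed)
    e_adc_pj: float = 1.2      # energy per ADC conversion, pJ (assumed)
    e_dac_pj: float = 0.3      # energy per input-driver event, pJ (assumed)
    cycle_ns: float = 10.0     # latency of one array activation, ns (assumed)


def cim_layer_cost(macro: MacroConfig,
                   total_macs: int,
                   weight_density: float,
                   input_density: float) -> dict:
    """Estimate energy (pJ) and latency (ns) for one sparse DNN layer.

    A rigid CIM array activates whole rows and columns at once, so
    sparsity only saves energy for cells that can be gated off. Here we
    assume a zero weight or zero input skips the MAC cell energy, while
    ADCs and input drivers still fire per activation.
    """
    macs_per_activation = macro.rows * macro.cols
    activations = -(-total_macs // macs_per_activation)  # ceiling division

    # Only cells with both a nonzero weight and a nonzero input consume
    # MAC energy (assumed gating model).
    effective_macs = total_macs * weight_density * input_density
    e_mac = effective_macs * macro.e_mac_pj

    # Peripheral circuits fire per activation regardless of sparsity,
    # reflecting the rigidity of the array structure.
    e_adc = activations * macro.cols * macro.e_adc_pj
    e_dac = activations * macro.rows * macro.e_dac_pj

    return {
        "activations": activations,
        "energy_pj": e_mac + e_adc + e_dac,
        "latency_ns": activations * macro.cycle_ns,
    }


if __name__ == "__main__":
    macro = MacroConfig()
    # A layer with 1M MACs, 40% nonzero weights, 60% nonzero activations.
    cost = cim_layer_cost(macro, total_macs=1_000_000,
                          weight_density=0.4, input_density=0.6)
    print(cost)
```

Note how the ADC and driver terms are independent of sparsity in this sketch: that models the rigid-array limitation the abstract highlights, where skipping zero operands reduces MAC energy but not the fixed peripheral cost of each array activation.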
Similar Papers
A Time- and Energy-Efficient CNN with Dense Connections on Memristor-Based Chips
Hardware Architecture
Makes AI chips faster and use less power.
Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs
Hardware Architecture
Makes smart computer programs run faster on small devices.
Computing-In-Memory Dataflow for Minimal Buffer Traffic
Hardware Architecture
Makes AI chips faster and use less power.