Sparse Computations in Deep Learning Inference

Published: December 2, 2025 | arXiv ID: 2512.02550v1

By: Ioanna Tasou, Panagiotis Mpakos, Angelos Vlachos, and others

Potential Business Impact:

Makes AI inference faster and less power-hungry.

Business Areas:
Intelligent Systems, Artificial Intelligence, Data and Analytics, Science and Engineering

The computational demands of modern Deep Neural Networks (DNNs) are immense and constantly growing. While training costs usually capture public attention, inference also contributes significantly to the computational, energy, and environmental footprint of AI systems. Sparsity stands out as a critical mechanism for drastically reducing these resource demands. However, its potential remains largely untapped and is not yet fully incorporated into production AI systems. To bridge this gap, this work provides the necessary knowledge and insights for performance engineers keen to get involved in deep learning inference optimization. In particular, in this work we: a) discuss the various forms of sparsity that can be utilized in DNN inference, b) explain how the original dense computations translate to sparse kernels, c) provide an extensive bibliographic review of the state of the art in the implementation of these kernels for CPUs and GPUs, d) discuss the availability of sparse datasets in support of sparsity-related research and development, e) explore the current software tools and frameworks that provide robust sparsity support, and f) present evaluation results of different implementations of the key SpMM and SDDMM kernels on CPU and GPU platforms. Ultimately, this paper aims to serve as a resource for performance engineers seeking to develop and deploy highly efficient sparse deep learning models in production.
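To make the two kernels named in the abstract concrete, here is a minimal sketch of SpMM (sparse matrix times dense matrix) and SDDMM (dense-dense product sampled at a sparse pattern) using SciPy's CSR format. The matrix shapes and densities are illustrative assumptions, not taken from the paper; production implementations would use the tuned CPU/GPU kernels the paper surveys.

```python
import numpy as np
from scipy.sparse import random as sparse_random, csr_matrix

rng = np.random.default_rng(0)

# SpMM: a sparse operand (e.g. a pruned weight matrix) times a dense
# activation matrix, C = S @ B. SciPy dispatches to a CSR SpMM routine.
S = sparse_random(64, 32, density=0.1, format="csr", random_state=0)
B = rng.standard_normal((32, 8))
C = S @ B
assert np.allclose(C, S.toarray() @ B)  # matches the dense computation

# SDDMM: compute entries of the dense product A @ Bt only at the nonzero
# positions of a sparse mask (e.g. an attention sparsity pattern),
# avoiding the full dense product.
A = rng.standard_normal((64, 16))
Bt = rng.standard_normal((16, 32))
mask = sparse_random(64, 32, density=0.05, format="csr", random_state=1)
rows, cols = mask.nonzero()
# One dot product per sampled (row, col) position.
vals = np.einsum("ij,ij->i", A[rows], Bt[:, cols].T)
D = csr_matrix((vals, (rows, cols)), shape=mask.shape)
assert np.allclose(D.toarray(), (A @ Bt) * (mask.toarray() != 0))
```

The SDDMM loop body shows why the kernel pays off: with 5% density it computes roughly one in twenty of the dot products a dense GEMM would, which is the saving the surveyed GPU and CPU implementations exploit.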

Country of Origin
🇬🇷 Greece

Page Count
78 pages

Category
Computer Science:
Computational Engineering, Finance, and Science