A High-Level Compiler Integration Approach for Deep Learning Accelerators Supporting Abstraction and Optimization
By: Samira Ahmadifarsani, Daniel Mueller-Gritschneder, Ulf Schlichtmann
Potential Business Impact:
Lets developers put new AI chips to use faster.
The growing adoption of domain-specific architectures in edge computing platforms for deep learning has underscored the efficiency of hardware accelerators. However, integrating custom accelerators into modern machine learning (ML) compilers remains complex, since it demands significant modifications to the compilation layers and specialized scheduling techniques. Existing frameworks offer only partial solutions and require users to navigate intricate compiler internals. In this paper, we introduce a TVM-based compilation integration approach that targets GEMM-based deep learning accelerators. Our approach abstracts away the complexities of compiler integration, so accelerators can be integrated seamlessly without in-depth knowledge of the underlying compiler. Furthermore, we extend and incorporate design space exploration tools, specifically CoSA, to automate efficient tensor scheduling while accounting for factors such as uneven mapping and double buffering. Our framework is benchmarked on the Gemmini accelerator and demonstrates performance comparable to Gemmini's specialized, manually implemented toolchain.
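The abstract does not spell out the integration mechanism, but a common way to plug a custom accelerator into TVM is its BYOC (Bring Your Own Codegen) flow: supported operator patterns are matched, annotated with a target, and partitioned into external functions handed to the accelerator's codegen. The sketch below is purely illustrative of that style of integration; the target name "gemmini", the pattern name "gemmini.dense_bias", and the helper partition_for_accelerator are assumptions, not the paper's actual API.

```python
# Minimal sketch of TVM's BYOC-style accelerator integration.
# ASSUMPTIONS: the "gemmini" target name and the composite pattern
# below are hypothetical; the paper's real interface may differ.
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

def make_gemm_pattern():
    # Match a dense (GEMM) op followed by a bias add -- the kind of
    # fused kernel a GEMM-based accelerator executes natively.
    gemm = is_op("nn.dense")(wildcard(), wildcard())
    return is_op("add")(gemm, wildcard())

# Entries of (pattern name, pattern) drive MergeComposite; the
# "gemmini." prefix lets AnnotateTarget claim the composite.
pattern_table = [("gemmini.dense_bias", make_gemm_pattern())]

def partition_for_accelerator(mod):
    """Group accelerator-supported ops and carve them out into
    external functions for the accelerator's codegen backend."""
    seq = tvm.transform.Sequential([
        relay.transform.MergeComposite(pattern_table),
        relay.transform.AnnotateTarget("gemmini"),  # assumed target name
        relay.transform.MergeCompilerRegions(),
        relay.transform.PartitionGraph(),
    ])
    return seq(mod)
```

Under the paper's approach, boilerplate of this kind would presumably be generated or hidden behind the abstraction layer, so that users describe the accelerator's capabilities rather than writing compiler passes by hand.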
Similar Papers
A Multi-level Compiler Backend for Accelerated Micro-kernels Targeting RISC-V ISA Extensions
Programming Languages
Makes AI run much faster on new chips.
Autocomp: LLM-Driven Code Optimization for Tensor Accelerators
Programming Languages
Makes computer chips run programs much faster.
Leveraging Neural Graph Compilers in Machine Learning Research for Edge-Cloud Systems
Distributed, Parallel, and Cluster Computing
Makes AI run faster on different computers.