Library Liberation: Competitive Performance Matmul Through Compiler-composed Nanokernels
By: Arun Thangamani, Md Asghar Ahmad Shahid, Adam Siemieniuk, and more
Potential Business Impact:
Automatically makes AI run faster on computers.
The rapidly evolving landscape of AI and machine learning workloads has widened the gap between high-level domain operations and efficient hardware utilization. Achieving near-peak performance still demands deep hardware expertise: experts either handcraft target-specific kernels (e.g., DeepSeek) or rely on specialized libraries (e.g., CUTLASS), both of which add complexity and limit scalability for most ML practitioners. This paper introduces a compilation scheme that automatically generates scalable, high-performance microkernels by leveraging MLIR dialects to bridge domain-level operations and processor capabilities. Our approach removes the dependence on low-level libraries by enabling the compiler to auto-generate near-optimal code directly. At its core is a mechanism for composing nanokernels from low-level IR constructs with near-optimal register utilization, forming efficient microkernels tailored to each target. We implement this technique in an MLIR-based compiler supporting both vector and tile-based CPU instructions. Experiments show that the generated nanokernels are of production quality and competitive with state-of-the-art microkernel libraries.
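To make the abstract concrete, below is a minimal hand-written C sketch (not taken from the paper) of the kind of register-tiled matmul microkernel such a scheme targets on an AVX2/FMA CPU: the output tile stays resident in vector registers, and the innermost "nanokernel" is a broadcast-plus-FMA pair repeated along the K dimension. The function name micro_4x16_f32 and the 4x16 tile shape are illustrative assumptions, not the paper's actual parameters.

```c
// Hypothetical sketch of a register-tiled microkernel, assuming AVX2 + FMA
// (compile with, e.g., gcc -O2 -mavx2 -mfma). A 4x16 block of C is held in
// 8 YMM accumulators; the inner broadcast+FMA pair is the "nanokernel".
#include <immintrin.h>

// Computes C[4][16] += A[4][K] * B[K][16] for row-major f32 operands.
// lda/ldb/ldc are row strides in elements.
void micro_4x16_f32(int K, const float *A, int lda,
                    const float *B, int ldb,
                    float *C, int ldc) {
    __m256 acc[4][2];
    // Load the C tile once; it stays resident in registers for the whole loop.
    for (int i = 0; i < 4; ++i) {
        acc[i][0] = _mm256_loadu_ps(&C[i * ldc + 0]);
        acc[i][1] = _mm256_loadu_ps(&C[i * ldc + 8]);
    }
    for (int k = 0; k < K; ++k) {
        // One row of B feeds all four row-nanokernels for this k.
        __m256 b0 = _mm256_loadu_ps(&B[k * ldb + 0]);
        __m256 b1 = _mm256_loadu_ps(&B[k * ldb + 8]);
        for (int i = 0; i < 4; ++i) {
            // Nanokernel: broadcast one A element, fused multiply-add into
            // the register-resident accumulators.
            __m256 a = _mm256_set1_ps(A[i * lda + k]);
            acc[i][0] = _mm256_fmadd_ps(a, b0, acc[i][0]);
            acc[i][1] = _mm256_fmadd_ps(a, b1, acc[i][1]);
        }
    }
    // Write the accumulated tile back to memory once.
    for (int i = 0; i < 4; ++i) {
        _mm256_storeu_ps(&C[i * ldc + 0], acc[i][0]);
        _mm256_storeu_ps(&C[i * ldc + 8], acc[i][1]);
    }
}
```

In the approach the abstract describes, a pattern like this inner broadcast/FMA sequence would be composed by the compiler from low-level IR constructs (MLIR vector ops, or tile instructions such as Intel AMX on targets that support them) rather than pulled from a pre-built library kernel.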
Similar Papers
From Loop Nests to Silicon: Mapping AI Workloads onto AMD NPUs with MLIR-AIR
Computation and Language
Makes computers use special chips much faster.
LAPIS: A Performance Portable, High Productivity Compiler Framework
Distributed, Parallel, and Cluster Computing
Lets computers run science and AI programs easily.
QiMeng-Kernel: Macro-Thinking Micro-Coding Paradigm for LLM-Based High-Performance GPU Kernel Generation
Distributed, Parallel, and Cluster Computing
Makes computers write faster code for AI.