Library Liberation: Competitive Performance Matmul Through Compiler-composed Nanokernels

Published: November 14, 2025 | arXiv ID: 2511.13764v1

By: Arun Thangamani, Md Asghar Ahmad Shahid, Adam Siemieniuk, and more

Potential Business Impact:

Lets the compiler automatically generate fast matrix-multiplication code, so AI workloads run near peak hardware speed without hand-tuned vendor libraries.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The rapidly evolving landscape of AI and machine learning workloads has widened the gap between high-level domain operations and efficient hardware utilization. Achieving near-peak performance still demands deep hardware expertise: experts either handcraft target-specific kernels (e.g., DeepSeek) or rely on specialized libraries (e.g., CUTLASS), both of which add complexity and limit scalability for most ML practitioners. This paper introduces a compilation scheme that automatically generates scalable, high-performance microkernels by leveraging MLIR dialects to bridge domain-level operations and processor capabilities. The approach removes the dependence on low-level libraries by enabling the compiler to auto-generate near-optimal code directly. At its core is a mechanism for composing nanokernels from low-level IR constructs with near-optimal register utilization, forming efficient microkernels tailored to each target. The technique is implemented in an MLIR-based compiler supporting both vector and tile-based CPU instructions. Experiments show that the generated nanokernels are of production quality and competitive with state-of-the-art microkernel libraries.
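To make the nanokernel-to-microkernel composition concrete, the sketch below is a hand-written C analogue of the idea, not the paper's method: the paper performs this composition at the MLIR IR level, and all names here (MR, NR, nanokernel_4x4, microkernel_4x4, and the packed-panel layout) are hypothetical illustration. The nanokernel is a single register-resident outer-product update of a small C tile; the microkernel composes it across the K dimension so the accumulator tile never leaves registers until one final write-back.

#include <stddef.h>

/* Illustrative sketch only. Assumes BLIS-style packed panels:
 * A is K blocks of MR contiguous elements, B is K blocks of NR. */

#define MR 4  /* rows of the C tile held in registers */
#define NR 4  /* cols of the C tile held in registers */

/* Nanokernel: one rank-1 (outer-product) update of an MRxNR tile.
 * acc[][] is small enough that the compiler keeps it in registers;
 * the inner multiply-adds map onto FMA lanes after vectorization. */
static inline void nanokernel_4x4(const float *a, const float *b,
                                  float acc[MR][NR]) {
    for (int i = 0; i < MR; ++i)
        for (int j = 0; j < NR; ++j)
            acc[i][j] += a[i] * b[j];
}

/* Microkernel: compose the nanokernel across K, keeping the
 * accumulator tile in registers and writing C back exactly once. */
static void microkernel_4x4(size_t K, const float *A, const float *B,
                            float *C, size_t ldc) {
    float acc[MR][NR] = {{0}};
    for (size_t k = 0; k < K; ++k)
        nanokernel_4x4(&A[k * MR], &B[k * NR], acc);
    for (int i = 0; i < MR; ++i)
        for (int j = 0; j < NR; ++j)
            C[i * ldc + j] += acc[i][j];
}

The design point the paper automates is the choice of MR and NR: they are fixed here, but a compiler can size the tile to the target's register file and instruction set (vector FMA vs. tile instructions), which is what "near-optimal register utilization" refers to.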

Page Count
23 pages

Category
Computer Science:
Machine Learning (CS)