Autocomp: LLM-Driven Code Optimization for Tensor Accelerators
By: Charles Hong, Sahil Bhatia, Alvin Cheung, and more
Potential Business Impact:
Makes computer chips run programs much faster.
Hardware accelerators, especially those designed for tensor processing, have become ubiquitous in today's computing landscape. However, even with significant efforts in building compilers, programming these tensor accelerators remains challenging, leaving much of their potential underutilized. Recently, large language models (LLMs), trained on large amounts of code, have shown significant promise in code generation and optimization tasks, but generating code in low-resource languages, such as those used by specialized tensor accelerators, still poses a significant challenge. We tackle this challenge with Autocomp, an approach that empowers accelerator programmers to leverage domain knowledge and hardware feedback to optimize code via an automated LLM-driven search. We accomplish this by: 1) formulating each optimization pass as a structured two-phase prompt, divided into planning and code generation phases, 2) inserting domain knowledge during planning via a concise and adaptable optimization menu, and 3) integrating correctness and performance metrics from hardware as feedback at each search iteration. Across three categories of representative workloads and two different accelerators, we demonstrate that Autocomp-optimized code runs 5.6x (GEMM) and 2.7x (convolution) faster than the vendor-provided library, and outperforms expert-level hand-tuned code by 1.4x (GEMM), 1.1x (convolution), and 1.3x (fine-grained linear algebra). Additionally, we demonstrate that optimization schedules generated by Autocomp can be reused across similar tensor operations, improving speedups by up to 24% under a fixed sample budget.
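The search loop described in the abstract — plan with a menu, generate code, then keep only candidates that hardware verifies as both correct and faster — can be sketched as below. This is a minimal illustration, not the paper's implementation: all function names are hypothetical, and the LLM calls and hardware runs are replaced with stubs.

```python
# Hypothetical sketch of an Autocomp-style two-phase optimization search.
# In the real system, plan() and generate() would query an LLM, and
# run_on_hardware() would compile and execute the code on an accelerator.

OPTIMIZATION_MENU = [
    "tile loops to fit on-chip memory",
    "double-buffer data transfers",
    "reorder loops for data reuse",
]

def plan(code, menu, feedback):
    """Phase 1: choose an optimization from the menu (stubbed as round-robin)."""
    return menu[len(feedback) % len(menu)]

def generate(code, plan_text):
    """Phase 2: rewrite the code per the plan (stubbed as an annotation)."""
    return f"# applied: {plan_text}\n{code}"

def run_on_hardware(code):
    """Stub hardware feedback: returns (is_correct, latency)."""
    return True, 100.0 / (1 + code.count("# applied:"))

def autocomp_search(code, iterations=3):
    best_code = code
    _, best_latency = run_on_hardware(code)
    feedback = []
    for _ in range(iterations):
        chosen = plan(best_code, OPTIMIZATION_MENU, feedback)
        candidate = generate(best_code, chosen)
        correct, latency = run_on_hardware(candidate)
        feedback.append((chosen, correct, latency))
        # Keep a candidate only if hardware confirms it is correct and faster.
        if correct and latency < best_latency:
            best_code, best_latency = candidate, latency
    return best_code, best_latency
```

Because each iteration's feedback is recorded and fed into the next planning phase, the search can steer away from optimizations that broke correctness or hurt performance.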
Similar Papers
Agentic Auto-Scheduling: An Experimental Study of LLM-Guided Loop Optimization
Programming Languages
Makes computer programs run much faster.
A High-Level Compiler Integration Approach for Deep Learning Accelerators Supporting Abstraction and Optimization
Machine Learning (CS)
Lets computers use new chips faster.