Opal: A Modular Framework for Optimizing Performance using Analytics and LLMs
By: Mohammad Zaeed, Tanzima Z. Islam, Vladimir Inđić
Potential Business Impact:
Makes computer programs run much faster automatically.
Large Language Models (LLMs) show promise for automated code optimization but struggle without performance context. This work introduces Opal, a modular framework that connects performance analytics insights with the vast body of published optimization knowledge by guiding LLMs to generate informed, trustworthy optimizations. Unlike traditional performance tools that identify bottlenecks but stop short of actionable suggestions, Opal bridges this long-standing gap by linking dynamic insights, from hardware counters and Roofline analysis to stall events, to optimization decisions. We evaluate Opal across 1640 experiments on real-world GPU kernels and find that in over 98.5% of cases, even a single insight source yields speedups, averaging from 19.34% to 52.3%. Our prompt template produced correct code in all but one case, where a vague diagnostic caused an unsafe suggestion. By automatically optimizing GPU kernels using performance analytics and LLMs, Opal marks a leap toward democratizing expert-level performance engineering for all.
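To illustrate the kind of workflow the abstract describes, here is a minimal, hypothetical Python sketch of how profiling insights (hardware counters, Roofline classification, stall events) could be packaged into an LLM prompt for kernel optimization. All names and the prompt wording are assumptions for illustration; the paper does not publish Opal's actual template or API here.

```python
# Hypothetical sketch (not Opal's actual implementation): packaging
# performance-analytics insights into a grounded LLM optimization prompt.
from dataclasses import dataclass


@dataclass
class KernelInsight:
    """One diagnostic derived from profiling (hypothetical structure)."""
    source: str    # e.g., "hardware_counters", "roofline", "stall_events"
    finding: str   # e.g., "kernel is memory-bandwidth bound"
    evidence: str  # e.g., "0.4 FLOP/byte, below the ridge point"


def build_optimization_prompt(kernel_src: str, insights: list[KernelInsight]) -> str:
    """Compose a prompt that grounds the LLM in measured performance context."""
    insight_text = "\n".join(
        f"- [{i.source}] {i.finding} (evidence: {i.evidence})" for i in insights
    )
    return (
        "You are a GPU performance engineer. Optimize the CUDA kernel below.\n"
        "Only apply transformations justified by the measured insights, and\n"
        "preserve the kernel's semantics.\n\n"
        f"Measured insights:\n{insight_text}\n\n"
        f"Kernel source:\n```cuda\n{kernel_src}\n```\n\n"
        "Return the optimized kernel and a brief justification for each change."
    )


if __name__ == "__main__":
    # Example usage with made-up diagnostics.
    insights = [
        KernelInsight("roofline", "memory-bandwidth bound",
                      "arithmetic intensity 0.4 FLOP/byte, below ridge point"),
        KernelInsight("stall_events", "long-scoreboard stalls dominate",
                      "42% of issue slots stalled on global-memory dependencies"),
    ]
    print(build_optimization_prompt("__global__ void saxpy(...) { ... }", insights))
```

The point of such a template is that each suggested transformation can be traced back to a concrete measurement, which is what distinguishes this approach from asking an LLM to optimize code without performance context.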
Similar Papers
LLMPerf: GPU Performance Modeling meets Large Language Models
Performance
Lets computers guess how fast programs will run.
Do Large Language Models Understand Performance Optimization?
Distributed, Parallel, and Cluster Computing
Computers write faster code, but sometimes make mistakes.
Beyond Single LLMs: Enhanced Code Generation via Multi-Stage Performance-Guided LLM Orchestration
Software Engineering
Makes AI write better computer code, faster.