Opal: A Modular Framework for Optimizing Performance using Analytics and LLMs

Published: October 1, 2025 | arXiv ID: 2510.00932v1

By: Mohammad Zaeed, Tanzima Z. Islam, Vladimir Inđić

Potential Business Impact:

Makes computer programs run much faster automatically.

Business Areas:
Application Performance Management, Data and Analytics, Software

Large Language Models (LLMs) show promise for automated code optimization but struggle without performance context. This work introduces Opal, a modular framework that connects performance analytics insights with the vast body of published optimization knowledge by guiding LLMs to generate informed, trustworthy optimizations. Unlike traditional performance tools that identify bottlenecks but stop short of actionable suggestions, Opal bridges this long-standing gap by linking dynamic insights from hardware counters, Roofline analysis, and stall events to optimization decisions. We evaluate Opal across 1,640 experiments on real-world GPU kernels and find that in over 98.5% of cases, even a single insight source yields speedups, averaging from 19.34% to 52.3%. Our prompt template produced correct code in all but one case, where a vague diagnostic caused an unsafe suggestion. By automatically optimizing GPU kernels using performance analytics and LLMs, Opal marks a leap toward democratizing expert-level performance engineering for all.
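The Roofline analysis mentioned above bounds a kernel's attainable throughput by the minimum of peak compute and the memory-bandwidth ceiling at the kernel's arithmetic intensity. A minimal sketch of that bound, with hypothetical hardware numbers (not from the paper):

```python
def roofline_bound(peak_gflops: float, peak_bw_gbs: float, arith_intensity: float) -> float:
    """Attainable GFLOP/s under the Roofline model.

    arith_intensity: FLOPs performed per byte moved from memory (FLOP/byte).
    The kernel is memory-bound when peak_bw_gbs * arith_intensity < peak_gflops,
    compute-bound otherwise.
    """
    return min(peak_gflops, peak_bw_gbs * arith_intensity)

# Hypothetical GPU: 10,000 GFLOP/s peak compute, 900 GB/s memory bandwidth.
low_ai = roofline_bound(10_000, 900, 2.0)    # memory-bound: 900 * 2 = 1800 GFLOP/s
high_ai = roofline_bound(10_000, 900, 50.0)  # compute-bound: capped at 10,000 GFLOP/s
print(low_ai, high_ai)
```

A kernel measured far below its roofline bound signals an optimization opportunity, which is the kind of insight Opal feeds to the LLM.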

Country of Origin
🇷🇸 🇺🇸 Serbia, United States

Page Count
12 pages

Category
Computer Science:
Performance