Score: 1

Agentic Auto-Scheduling: An Experimental Study of LLM-Guided Loop Optimization

Published: November 1, 2025 | arXiv ID: 2511.00592v1

By: Massinissa Merouani, Islem Kara Bernou, Riyadh Baghdadi

Potential Business Impact:

Automatically speeds up loop-heavy programs: across the PolyBench suite the approach reports geometric-mean speedups of 2.66x (single run) and 3.54x (best-of-5) over the original code, without any task-specific model fine-tuning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Automatic code optimization remains a difficult challenge, particularly for complex loop nests on modern hardware. This paper investigates a novel approach to code optimization where Large Language Models (LLMs) guide the process through a closed-loop interaction with a compiler. We present ComPilot, an experimental framework that leverages off-the-shelf LLMs, without any task-specific fine-tuning, as interactive optimization agents. ComPilot establishes a feedback loop where an LLM proposes transformations for a given loop nest to a compiler. The compiler attempts the transformations, reporting back legality status and measured speedup or slowdown. The LLM utilizes this concrete feedback to iteratively refine its optimization strategy. Our extensive evaluation across the PolyBench benchmark suite demonstrates the effectiveness of this zero-shot approach. ComPilot achieves geometric mean speedups of 2.66x (single run) and 3.54x (best-of-5 runs) over the original code. Furthermore, ComPilot demonstrates competitive performance against the state-of-the-art Pluto polyhedral optimizer, outperforming it in many cases. This experimental study demonstrates that general-purpose LLMs can effectively guide the code optimization process when grounded by compiler feedback, opening promising research directions for agentic AI in code optimization.
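To make the closed-loop protocol described in the abstract concrete, here is a minimal Python sketch of one possible shape of the interaction: the LLM proposes a transformation for a loop nest, the compiler checks legality, applies it, benchmarks, and reports back, and the loop repeats. All names (`propose_transformation`, `try_transformation`, `Feedback`) and the stubbed return values are hypothetical illustrations, not ComPilot's actual interfaces, which are not specified in this summary.

```python
# Hypothetical sketch of an LLM-guided, compiler-in-the-loop optimization cycle.
# The LLM and compiler calls below are stubs; real implementations would call an
# off-the-shelf LLM API and a compiler that can test transformation legality
# and measure runtime.

from dataclasses import dataclass

@dataclass
class Feedback:
    legal: bool      # did the compiler accept the transformation as legal?
    speedup: float   # measured speedup (>1.0) or slowdown (<1.0) vs. baseline

def propose_transformation(loop_nest: str,
                           history: list[tuple[str, Feedback]]) -> str:
    """Stub for an off-the-shelf LLM call (no fine-tuning) that suggests the
    next loop transformation, conditioned on prior legality/performance
    feedback, e.g. 'tile i by 32' or 'interchange i and j'."""
    return "tile i by 32"  # placeholder proposal

def try_transformation(loop_nest: str, transformation: str) -> Feedback:
    """Stub for the compiler side: check legality, apply the transformation
    if legal, benchmark the result, and report the outcome."""
    return Feedback(legal=True, speedup=1.8)  # placeholder measurement

def optimize(loop_nest: str, iterations: int = 5) -> tuple[str, float]:
    """Closed feedback loop: the LLM iteratively refines its strategy using
    the compiler's concrete legality and speedup reports."""
    history: list[tuple[str, Feedback]] = []
    best_transformation, best_speedup = "", 1.0
    for _ in range(iterations):
        proposal = propose_transformation(loop_nest, history)
        feedback = try_transformation(loop_nest, proposal)
        history.append((proposal, feedback))
        if feedback.legal and feedback.speedup > best_speedup:
            best_transformation, best_speedup = proposal, feedback.speedup
    return best_transformation, best_speedup
```

The design point this sketch is meant to capture is that the LLM never reasons in the dark: every proposal is grounded by the compiler's legality check and a measured speedup before the next proposal is made.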

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Programming Languages