Score: 2

Tutoring LLM into a Better CUDA Optimizer

Published: October 19, 2025 | arXiv ID: 2510.16933v1

By: Matyáš Brabec, Jiří Klepl, Michal Töpfer, and more

Potential Business Impact:

Helps developers use LLMs to generate faster, optimized CUDA code for common computational tasks.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent leaps in large language models (LLMs) have sparked a revolution in programming tools (such as GitHub Copilot) that assist with code generation, debugging, and even performance optimization. In this paper, we focus on the capabilities of the most recent reasoning models to generate optimized CUDA code for predefined, well-known tasks. Our objective is to determine which types of code optimizations and parallel patterns the LLMs can perform on their own, and whether they can be improved by tutoring (providing more detailed hints and guidelines in the prompt). The generated solutions were evaluated both automatically (for correctness and speedup) and manually (code reviews) to provide a more detailed perspective. We also tried an interactive approach in which the LLM can fix its previous mistakes within a session. The results indicate that LLMs are quite skilled coders; however, they require tutoring to match the optimized solutions produced by parallel-computing experts.
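To give a concrete sense of the kind of "well-known task" and optimization pattern the abstract alludes to, here is a minimal illustrative sketch (not taken from the paper): a shared-memory tiled matrix-multiplication kernel, a classic CUDA optimization that tutoring might steer an LLM toward instead of a naive one-thread-per-element kernel. The kernel name `matmul_tiled` and the tile size `TILE` are hypothetical choices for this example.

```cuda
// Illustrative sketch, not from the paper: shared-memory tiling for C = A * B.
// All matrices are n x n, row-major; n is assumed to be divisible by TILE.
#include <cuda_runtime.h>

#define TILE 16  // tile edge length; assumed value for illustration

__global__ void matmul_tiled(const float* A, const float* B, float* C, int n)
{
    __shared__ float As[TILE][TILE];   // tile of A staged in shared memory
    __shared__ float Bs[TILE][TILE];   // tile of B staged in shared memory

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    // Walk over the tiles along the shared (inner) dimension.
    for (int t = 0; t < n / TILE; ++t) {
        // Each thread loads one element of the A tile and one of the B tile.
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();               // tiles fully loaded before use

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();               // done with the tiles before reloading
    }
    C[row * n + col] = acc;
}
```

The point of the pattern is that each global-memory element is loaded once per tile and then reused TILE times from shared memory, which is exactly the sort of data-reuse optimization the study compares against expert-written reference solutions.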

Country of Origin
🇨🇿 Czech Republic

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing