Tutoring LLM into a Better CUDA Optimizer
By: Matyáš Brabec, Jiří Klepl, Michal Töpfer, and more
Potential Business Impact:
Helps computers write faster code for common computing tasks.
Recent leaps in large language models (LLMs) have caused a revolution in programming tools (such as GitHub Copilot) that can help with code generation, debugging, and even performance optimization. In this paper, we focus on the capabilities of the most recent reasoning models to generate optimized CUDA code for predefined, well-known tasks. Our objective is to determine which types of code optimizations and parallel patterns the LLMs can perform on their own and whether they can be improved by tutoring (providing more detailed hints and guidelines in the prompt). The generated solutions were evaluated both automatically (for correctness and speedup) and manually (code reviews) to provide a more detailed perspective. We also tried an interactive approach in which the LLM can fix its previous mistakes within a session. The results indicate that LLMs are quite skilled coders; however, they require tutoring to reach the level of optimized solutions produced by parallel computing experts.
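To make the optimization gap concrete: the paper does not publish its benchmark kernels here, so the sketch below is only an illustrative, hypothetical example of the kind of difference being measured for a well-known parallel pattern (a sum reduction). The naive version is what an untutored model might produce; the shared-memory version is the style an expert, or a tutored model, might reach. All names and sizes are assumptions, not the paper's actual tasks.

```cuda
// Illustrative sketch only: naive vs. tuned reduction of n floats.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Naive: every thread issues one global atomic, serializing on contention.
__global__ void sum_naive(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) atomicAdd(out, in[i]);
}

// Optimized style: block-local tree reduction in shared memory,
// then a single atomic per block instead of one per element.
__global__ void sum_shared(const float* in, float* out, int n) {
    extern __shared__ float buf[];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;
    buf[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) buf[tid] += buf[tid + s];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(out, buf[0]);
}

int main() {
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    std::vector<float> h_in(n, 1.0f);   // all ones, so the expected sum is n
    float *d_in, *d_out, h_out = 0.0f;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, sizeof(float));
    cudaMemcpy(d_in, h_in.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemset(d_out, 0, sizeof(float));
    sum_shared<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);
    cudaMemcpy(&h_out, d_out, sizeof(float), cudaMemcpyDeviceToHost);
    printf("sum = %.0f (expected %d)\n", h_out, n);
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Both kernels compute the same result; the difference lies in memory traffic and atomic contention, which is exactly the kind of property the paper's automatic speedup evaluation would capture and its manual code reviews would explain.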
Similar Papers
Evaluating Large Language Models for Workload Mapping and Scheduling in Heterogeneous HPC Systems
Distributed, Parallel, and Cluster Computing
Lets computers solve hard scheduling puzzles from words.
From Large to Small: Transferring CUDA Optimization Expertise via Reasoning Graph
Machine Learning (CS)
Makes small AI write fast computer code.
Evaluating Code Generation of LLMs in Advanced Computer Science Problems
Artificial Intelligence
Helps computers write harder code, but not perfectly.