An Experimental Study of Real-Life LLM-Proposed Performance Improvements
By: Lirong Yi, Gregory Gay, Philipp Leitner
Potential Business Impact:
Computers can make code faster, but humans still do it best.
Large Language Models (LLMs) can generate code, but can they generate fast code? In this paper, we study this question using a dataset of 65 real-world tasks mined from open-source Java programs. We specifically select tasks where developers achieved significant speedups, and employ an automated pipeline to generate patches for these tasks using two leading LLMs under four prompt variations. By rigorously benchmarking the results against the baseline and human-authored solutions, we demonstrate that LLM-generated code indeed improves performance over the baseline in most cases. However, patches proposed by human developers outperform LLM fixes by a statistically significant margin, indicating that LLMs often fall short of finding truly optimal solutions. We further find that LLM solutions are semantically identical or similar to the developer's optimization idea in approximately two-thirds of cases, whereas they propose a more original idea in the remaining third. Yet these original ideas only occasionally yield substantial performance gains.
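The paper's actual benchmarking harness is not reproduced here, but the core comparison it describes, timing a baseline method against a patched one under repeated measurement, can be illustrated with a minimal sketch. The example below assumes the JMH (Java Microbenchmark Harness) framework and an invented string-concatenation workload; the class name PatchBenchmark and the task itself are purely hypothetical, not drawn from the study's dataset.

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Hypothetical sketch of a baseline-vs-patch benchmark, in the spirit of
// the paper's methodology. The workload (string concatenation in a loop)
// is an invented stand-in for one of the mined Java tasks.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@State(Scope.Benchmark)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
@Fork(1)
public class PatchBenchmark {

    @Param({"1000"})
    int size;

    // Baseline: repeated String concatenation copies the whole string
    // on every iteration, giving quadratic work overall.
    @Benchmark
    public String baseline() {
        String s = "";
        for (int i = 0; i < size; i++) {
            s += i;
        }
        return s;
    }

    // Patched: StringBuilder appends in amortized constant time, the
    // kind of fix either a developer or an LLM might propose.
    @Benchmark
    public String patched() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < size; i++) {
            sb.append(i);
        }
        return sb.toString();
    }
}
```

The repeated warmup and measurement iterations configured above are what make an average-time comparison between the baseline and patched variants statistically meaningful, mirroring the kind of rigorous benchmarking the paper applies to LLM- and human-authored patches.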
Similar Papers
An Empirical Study of LLM-Based Code Clone Detection
Software Engineering
Helps computers find similar code, but not always.
Model-Assisted and Human-Guided: Perceptions and Practices of Software Professionals Using LLMs for Coding
Software Engineering
Helps coders build software faster and smarter.
Beyond Synthetic Benchmarks: Evaluating LLM Performance on Real-World Class-Level Code Generation
Software Engineering
Computers struggle to write real code.