How Do Agents Perform Code Optimization? An Empirical Study
By: Huiyun Peng, Antonio Zhong, Ricardo Andrés Calvo Méndez, and more
Performance optimization is a critical yet challenging aspect of software development, often requiring a deep understanding of system behavior, algorithmic tradeoffs, and careful code modifications. Although recent advances in AI coding agents have accelerated code generation and bug fixing, little is known about how these agents perform on real-world performance optimization tasks. We present the first empirical study comparing agent- and human-authored performance optimization commits, analyzing 324 agent-generated and 83 human-authored PRs from the AIDev dataset across adoption, maintainability, optimization patterns, and validation practices. We find that AI-authored performance PRs are less likely to include explicit performance validation than human-authored PRs (45.7% vs. 63.6%, p = 0.007). In addition, AI-authored PRs largely use the same optimization patterns as humans. We further discuss limitations and opportunities for advancing agentic code optimization.
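The headline comparison (45.7% vs. 63.6% of PRs with explicit performance validation, p = 0.007) is a two-proportion contrast. Below is a minimal sketch of how such a p-value could be obtained, assuming the percentages apply to the 324 agent-generated and 83 human-authored PRs and using Fisher's exact test; the study's actual per-group counts and choice of statistical test are not stated in the abstract, so the numbers here are illustrative reconstructions, not the paper's analysis.

```python
# Minimal sketch: significance test for the validation-rate gap.
# Counts are back-calculated from the reported percentages (assumption);
# the paper's real counts and test may differ.
from scipy.stats import fisher_exact

ai_total, human_total = 324, 83
ai_with_validation = round(0.457 * ai_total)        # ~148 AI-authored PRs with explicit validation
human_with_validation = round(0.636 * human_total)  # ~53 human-authored PRs with explicit validation

# 2x2 contingency table: rows = author type, columns = validation present / absent
table = [
    [ai_with_validation, ai_total - ai_with_validation],
    [human_with_validation, human_total - human_with_validation],
]

# Tuple unpacking works across SciPy versions (older tuple return and newer result object)
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```

A chi-squared test on the same 2x2 table would be a reasonable alternative at these sample sizes; Fisher's exact test is chosen here only because it makes no large-sample assumption.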