AI-Generated Code Is Not Reproducible (Yet): An Empirical Study of Dependency Gaps in LLM-Based Coding Agents
By: Bhanu Prakash Vangala, Ali Adibifar, Tanu Malik, and more
The rise of Large Language Models (LLMs) as coding agents promises to accelerate software development, but their impact on the reproducibility of generated code remains largely unexplored. This paper presents an empirical study investigating whether LLM-generated code can be executed successfully in a clean environment containing only OS packages and the dependencies that the model itself specifies. We evaluate three state-of-the-art LLM coding agents (Claude Code, OpenAI Codex, and Gemini) across 300 projects generated from 100 standardized prompts in Python, JavaScript, and Java. We introduce a three-layer dependency framework, distinguishing between claimed, working, and runtime dependencies, to quantify execution reproducibility. Our results show that only 68.3% of projects execute out of the box, with substantial variation across languages (Python 89.2%, Java 44.0%). We also find a 13.5× average expansion from declared to actual runtime dependencies, revealing a significant layer of hidden dependencies.
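To make the claimed-versus-runtime distinction concrete, the sketch below shows one simple way such a dependency gap could be measured for a Python project: read the packages the agent declared in requirements.txt, run the project's entry point, and compare against the top-level modules actually loaded. This is a minimal illustration under assumed conventions (the file names `requirements.txt` and `sample_project/main.py`, and a naive mapping from distribution names to import names), not the paper's actual measurement tooling, which is not described in the abstract.

```python
# Minimal sketch: compare "claimed" vs. "runtime" dependencies for one
# Python project. File layout and name-mapping logic are assumptions,
# not the paper's methodology.
import runpy
import sys
from pathlib import Path


def claimed_dependencies(project_dir: str) -> set[str]:
    """Packages the agent declared in requirements.txt (names only)."""
    req = Path(project_dir) / "requirements.txt"
    if not req.exists():
        return set()
    names = set()
    for line in req.read_text().splitlines():
        line = line.split("#")[0].strip()
        if line:
            # Drop version pins and extras, keep the bare distribution name.
            names.add(line.split("[")[0].split("==")[0].split(">=")[0].strip().lower())
    return names


def runtime_dependencies(entry_point: str) -> set[str]:
    """Run the project and record which top-level modules got imported."""
    before = set(sys.modules)
    try:
        runpy.run_path(entry_point, run_name="__main__")
    except Exception:
        pass  # an out-of-the-box failure is itself a data point
    return {m.split(".")[0] for m in set(sys.modules) - before}


if __name__ == "__main__":
    claimed = claimed_dependencies("sample_project")
    runtime = runtime_dependencies("sample_project/main.py")
    hidden = runtime - claimed  # imported but never declared
    print(f"claimed={len(claimed)} runtime={len(runtime)} hidden={len(hidden)}")
```

Note that a real measurement would also need to reconcile distribution names with import names (e.g., `scikit-learn` installs the module `sklearn`) and to separate standard-library modules from third-party ones; the sketch ignores both issues for brevity.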