Large Language Models for Software Engineering: A Reproducibility Crisis
By: Mohammed Latif Siddiq, Arvin Islam-Gomes, Natalie Sekerak, et al.
Potential Business Impact:
Makes LLM-based software engineering research easier to reproduce.
Reproducibility is a cornerstone of scientific progress, yet its state in large language model (LLM)-based software engineering (SE) research remains poorly understood. This paper presents the first large-scale empirical study of reproducibility practices in LLM-for-SE research. We systematically mined and analyzed 640 papers published between 2017 and 2025 across premier software engineering, machine learning, and natural language processing venues, extracting structured metadata from publications, repositories, and documentation. Guided by four research questions, we examine (i) the prevalence of reproducibility smells, (ii) how reproducibility has evolved over time, (iii) whether artifact evaluation badges reliably reflect reproducibility quality, and (iv) how publication venues influence transparency practices. Using a taxonomy of seven smell categories (Code and Execution; Data; Documentation; Environment and Tooling; Versioning; Model; and Access and Legal), we manually annotated all papers and associated artifacts. Our analysis reveals persistent gaps in artifact availability, environment specification, versioning rigor, and documentation clarity, despite modest improvements in recent years and increased adoption of artifact evaluation processes at top SE venues. Notably, we find that badges often signal artifact presence but do not consistently guarantee execution fidelity or long-term reproducibility. Motivated by these findings, we provide actionable recommendations to mitigate reproducibility smells and introduce a Reproducibility Maturity Model (RMM) to move beyond binary artifact certification toward multi-dimensional, progressive evaluation of reproducibility rigor.
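To make the taxonomy concrete, below is a minimal sketch, in Python, of how the seven smell categories and a per-paper annotation record could be encoded. This is not the authors' published schema; every name here (SmellCategory, PaperAnnotation, and the example record) is a hypothetical illustration of the annotation scheme the abstract describes.

```python
# Hypothetical encoding of the seven-category reproducibility-smell taxonomy.
# Names and fields are illustrative assumptions, not the paper's actual schema.
from dataclasses import dataclass, field
from enum import Enum


class SmellCategory(Enum):
    """The seven reproducibility-smell categories named in the abstract."""
    CODE_AND_EXECUTION = "Code and Execution"
    DATA = "Data"
    DOCUMENTATION = "Documentation"
    ENVIRONMENT_AND_TOOLING = "Environment and Tooling"
    VERSIONING = "Versioning"
    MODEL = "Model"
    ACCESS_AND_LEGAL = "Access and Legal"


@dataclass
class PaperAnnotation:
    """One manually annotated paper: metadata plus observed smells."""
    title: str
    year: int
    venue: str
    has_artifact_badge: bool = False
    smells: set[SmellCategory] = field(default_factory=set)


# Fictional example: a badged paper whose repository pins no dependency
# versions and omits which model checkpoint was used.
paper = PaperAnnotation(
    title="Example LLM-for-SE Paper",
    year=2024,
    venue="ICSE",
    has_artifact_badge=True,
    smells={SmellCategory.VERSIONING, SmellCategory.MODEL},
)

# A badge does not imply a smell-free artifact, mirroring the paper's finding
# that badges signal presence rather than execution fidelity.
print(f"Badged: {paper.has_artifact_badge}, "
      f"smells: {[s.value for s in paper.smells]}")
```

A record like this makes the study's badge finding easy to express: a paper can carry an artifact badge and still exhibit several smell categories.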
Similar Papers
Reflections on the Reproducibility of Commercial LLM Performance in Empirical Software Engineering Studies
Software Engineering
Makes machine learning results easier to verify.
Reflecting on Empirical and Sustainability Aspects of Software Engineering Research in the Era of Large Language Models
Software Engineering
Improves how we evaluate and apply AI in software engineering research.
Guidelines for Empirical Studies in Software Engineering involving Large Language Models
Software Engineering
Makes empirical software engineering studies easier to verify and repeat.