Designing Empirical Studies on LLM-Based Code Generation: Towards a Reference Framework
By: Nathalia Nascimento, Everton Guimaraes, Paulo Alencar
Potential Business Impact:
Makes AI code generation easier to test fairly.
The rise of large language models (LLMs) has introduced transformative potential in automated code generation, addressing a wide range of software engineering challenges. However, empirical evaluation of LLM-based code generation lacks standardization, with studies varying widely in goals, tasks, and metrics, which limits comparability and reproducibility. In this paper, we propose a theoretical framework for designing and reporting empirical studies on LLM-based code generation. The framework is grounded in both our prior experience conducting such experiments and a comparative analysis of key similarities and differences among recent studies. It organizes evaluation around core components such as problem sources, quality attributes, and metrics, supporting structured and systematic experimentation. We demonstrate its applicability through representative case mappings and identify opportunities for refinement. Looking forward, we plan to evolve the framework into a more robust and mature tool for standardizing LLM evaluation across software engineering contexts.
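To make the component view in the abstract concrete, here is a minimal, hypothetical Python sketch of how a study design could be recorded along the components the paper names (problem sources, quality attributes, metrics). All class and field names below are illustrative assumptions, not the authors' actual framework or schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only: one way to encode the evaluation components the
# abstract mentions (problem sources, quality attributes, metrics) as a
# structured study-design record. Names are assumptions for illustration.

@dataclass
class MetricSpec:
    name: str                 # e.g. "pass@1", "cyclomatic complexity"
    quality_attribute: str    # e.g. "functional correctness", "maintainability"

@dataclass
class CodeGenStudyDesign:
    goal: str                          # research goal of the experiment
    problem_source: str                # e.g. a benchmark or industrial task set
    tasks: List[str]                   # concrete generation tasks evaluated
    models: List[str]                  # LLMs under study
    metrics: List[MetricSpec] = field(default_factory=list)

    def report_checklist(self) -> List[str]:
        """Return the items a report would need to state explicitly."""
        return [
            f"Goal: {self.goal}",
            f"Problem source: {self.problem_source}",
            f"Tasks: {', '.join(self.tasks)}",
            f"Models: {', '.join(self.models)}",
            *(f"Metric: {m.name} -> {m.quality_attribute}" for m in self.metrics),
        ]

# Example usage with placeholder values:
design = CodeGenStudyDesign(
    goal="Compare functional correctness across two LLMs",
    problem_source="HumanEval",
    tasks=["function synthesis from docstring"],
    models=["model-A", "model-B"],
    metrics=[MetricSpec("pass@1", "functional correctness")],
)
for line in design.report_checklist():
    print(line)
```

Recording a study in a structure like this would make the reporting dimensions explicit and comparable across experiments, which is the kind of standardization the paper argues for.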
Similar Papers
Evaluation Guidelines for Empirical Studies in Software Engineering involving LLMs
Software Engineering
Makes computer research with AI easier to check.
Guidelines for Empirical Studies in Software Engineering involving Large Language Models
Software Engineering
Makes computer studies easier to check and repeat.