Score: 1

Designing Empirical Studies on LLM-Based Code Generation: Towards a Reference Framework

Published: October 4, 2025 | arXiv ID: 2510.03862v1

By: Nathalia Nascimento, Everton Guimaraes, Paulo Alencar

Potential Business Impact:

Standardizes how AI code-generation tools are evaluated, making study results easier to compare and reproduce.

Business Areas:
Simulation Software

The rise of large language models (LLMs) has introduced transformative potential in automated code generation, addressing a wide range of software engineering challenges. However, empirical evaluation of LLM-based code generation lacks standardization, with studies varying widely in goals, tasks, and metrics, which limits comparability and reproducibility. In this paper, we propose a theoretical framework for designing and reporting empirical studies on LLM-based code generation. The framework is grounded in both our prior experience conducting such experiments and a comparative analysis of key similarities and differences among recent studies. It organizes evaluation around core components such as problem sources, quality attributes, and metrics, supporting structured and systematic experimentation. We demonstrate its applicability through representative case mappings and identify opportunities for refinement. Looking forward, we plan to evolve the framework into a more robust and mature tool for standardizing LLM evaluation across software engineering contexts.
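To make the abstract's core components concrete, the sketch below shows one way a study design could be captured as a structured record covering problem sources, models, quality attributes, and metrics. This is not the paper's actual framework; the class and field names are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CodeGenStudyDesign:
    """Hypothetical record of the core components an empirical study
    on LLM-based code generation might report."""
    problem_source: str            # e.g., a benchmark or task set used as input
    models: List[str]              # LLMs under evaluation
    quality_attributes: List[str]  # e.g., functional correctness, readability
    metrics: List[str]             # e.g., pass@k, defect counts
    notes: str = ""                # reporting context (prompting setup, runs)

# Example: recording one study configuration so it can be compared across papers.
study = CodeGenStudyDesign(
    problem_source="HumanEval",
    models=["gpt-4", "codellama-34b"],
    quality_attributes=["functional correctness", "readability"],
    metrics=["pass@1", "pass@10"],
    notes="zero-shot prompting, 5 runs per task",
)
print(study)
```

Recording studies in a shared schema like this is one way the comparability and reproducibility goals described in the abstract could be operationalized.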

Country of Origin
πŸ‡¨πŸ‡¦ πŸ‡ΊπŸ‡Έ Canada, United States

Page Count
5 pages

Category
Computer Science:
Software Engineering