Guidelines to Prompt Large Language Models for Code Generation: An Empirical Characterization
By: Alessandro Midolo, Alessandro Giagnorio, Fiorella Zampetti, et al.
Potential Business Impact:
Provides developers with validated guidelines for writing prompts that yield higher-quality LLM-generated code.
Large Language Models (LLMs) are nowadays extensively used for various software engineering tasks, primarily code generation. Previous research has shown that suitable prompt engineering can help developers improve their code generation prompts. So far, however, no specific guidelines exist to drive developers towards writing suitable prompts for code generation. In this work, we derive and evaluate development-specific prompt optimization guidelines. First, we use an iterative, test-driven approach to automatically refine code generation prompts, and we analyze the outcome of this process to identify the prompt improvement items that lead to test passes. We use these items to elicit 10 guidelines for prompt improvement, related to better specifying I/O and pre/post-conditions, providing examples and various types of details, or clarifying ambiguities. We then conduct an assessment with 50 practitioners, who report both their usage of the elicited prompt improvement patterns and their perceived usefulness, which does not always match how often they actually used the patterns before learning about our guidelines. Our results carry implications not only for practitioners and educators, but also for those aiming to create better LLM-aided software development tools.
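The iterative, test-driven refinement process the abstract describes can be sketched in Python as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the callables generate_code (an LLM call), run_tests (a test harness), and refine (a prompt-improvement step) are hypothetical placeholders that a caller would supply.

from typing import Callable, List, Tuple

def refine_prompt(
    prompt: str,
    generate_code: Callable[[str], str],           # hypothetical LLM call: prompt -> code
    run_tests: Callable[[str], Tuple[bool, str]],  # hypothetical harness: code -> (all passed?, failure report)
    refine: Callable[[str, str], str],             # hypothetical step: (prompt, failure report) -> improved prompt
    max_iterations: int = 5,
) -> Tuple[str, str, List[str]]:
    """Iteratively refine a code-generation prompt until its tests pass.

    Returns the final prompt, the last generated code, and the history of
    prompts tried. Each failed run yields a failure report that is folded
    back into the prompt, e.g. by clarifying I/O, pre/post-conditions, or
    adding examples, mirroring the guideline categories the paper elicits.
    """
    history = [prompt]
    code = generate_code(prompt)
    for _ in range(max_iterations):
        passed, report = run_tests(code)
        if passed:
            break
        prompt = refine(prompt, report)  # feed test feedback into the next prompt
        history.append(prompt)
        code = generate_code(prompt)
    return prompt, code, history

Analyzing the prompt histories produced by a loop of this shape, and noting which refinements flipped failing tests to passing, is the kind of outcome analysis from which the paper's 10 guidelines are derived.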
Similar Papers
Reporting LLM Prompting in Automated Software Engineering: A Guideline Based on Current Practices and Expectations
Software Engineering
Helps researchers report LLM prompting practices clearly and reproducibly.
Prompt Engineering Guidelines for Using Large Language Models in Requirements Engineering
Software Engineering
Guides prompt writing for LLM use in requirements engineering.
Evaluating Large Language Models for Code Translation: Effects of Prompt Language and Prompt Design
Software Engineering
Shows how prompt language and design affect LLM code translation between programming languages.