Interleaving Natural Language Prompting with Code Editing for Solving Programming Tasks with Generative AI Models
By: Victor-Alexandru Pădurean, Paul Denny, Andrew Luxton-Reilly, and more
Potential Business Impact:
Shows how students mix asking AI for code and editing it themselves to learn programming.
Computing students today often rely on both natural-language prompting and manual code editing to solve programming tasks. Yet we still lack a clear understanding of how these two modes are combined in practice, and how their usage varies with task complexity and student ability. In this paper, we investigate this through a large-scale study in an introductory programming course, collecting 13,305 interactions from 355 students during a three-day laboratory activity. Our analysis shows that students primarily use prompting to generate initial solutions, and then often enter short edit-run loops to refine their code after a failed execution. We find that manual editing becomes more frequent as task complexity increases, but most edits remain concise, with many affecting a single line of code. Higher-performing students tend to succeed using prompting alone, while lower-performing students rely more on edits. Student reflections confirm that prompting is helpful for structuring solutions, that editing is effective for making targeted corrections, and that both are useful for learning. These findings highlight the role of manual editing as a deliberate last-mile repair strategy, complementing prompting in AI-assisted programming workflows.
Similar Papers
Prompt Programming: A Platform for Dialogue-based Computational Problem Solving with Generative AI Models
Computers and Society
Teaches students to talk to computers for coding help.
Exploring Direct Instruction and Summary-Mediated Prompting in LLM-Assisted Code Modification
Software Engineering
Helps computers fix and change computer code.
Prompting in Practice: Investigating Software Developers' Use of Generative AI Tools
Software Engineering
Helps programmers use AI to write better code.