Who's the Leader? Analyzing Novice Workflows in LLM-Assisted Debugging of Machine Learning Code
By: Jessica Y. Bo, Majeed Kazemitabaar, Emma Zhuang, et al.
Potential Business Impact:
Helps beginners learn to code without over-relying on AI assistance.
While LLMs are often touted as tools for democratizing specialized knowledge to beginners, their actual effectiveness at improving task performance and learning remains an open question. Novices are known to engage with LLMs differently from experts, and prior studies report meta-cognitive pitfalls that limit novices' ability to verify outputs and prompt effectively. We focus on machine learning (ML), a task domain that combines high complexity with low verifiability, to understand the impact of LLM assistance on novices. We conduct a formative study in which eight novice ML engineers, given a buggy ML script and open access to ChatGPT, debug the script while we observe their reliance on, interactions with, and perceptions of the LLM. We find that user actions can be roughly categorized into leading the LLM and being led by the LLM, and we investigate how these modes affect reliance outcomes such as over- and under-reliance. These results have implications for novices' cognitive engagement in LLM-assisted tasks and for potential negative effects on downstream learning. Lastly, we propose augmentations to the novice-LLM interaction paradigm that promote cognitive engagement.
Similar Papers
Observing Without Doing: Pseudo-Apprenticeship Patterns in Student LLM Use
Human-Computer Interaction
Helps students learn to code rather than merely copying AI-generated solutions.
Designing for Novice Debuggers: A Pilot Study on an AI-Assisted Debugging Tool
Software Engineering
Helps students fix code errors while still reasoning through them themselves.
Not Everyone Wins with LLMs: Behavioral Patterns and Pedagogical Implications in AI-assisted Data Analysis
Human-Computer Interaction
Helps students use AI more effectively when coding for data analysis.