Automatically Generating Questions About Scratch Programs
By: Florian Obermüller, Gordon Fraser
Potential Business Impact:
Helps teachers check if students understand code.
When learning to program, students are usually assessed based on the code they write. However, merely completing a programming task does not guarantee actual comprehension of the underlying concepts. Asking learners questions about the code they wrote has therefore been proposed as a means to assess program comprehension. Since creating targeted questions for individual student programs can be tedious and challenging, prior work has proposed generating such questions automatically. In this paper we generalize this idea to the block-based programming language Scratch. We propose a set of 30 different questions for Scratch code covering an established program comprehension model, and we extend the LitterBox static analysis tool to automatically generate corresponding questions for a given Scratch program. On a dataset of 600,913 projects we automatically generated 54,118,694 questions. Our initial experiments with 34 ninth graders demonstrate that this approach can indeed generate meaningful questions for Scratch programs, and we find that students' ability to answer these questions about their own programs relates to their overall performance.
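To give a flavor of the idea, the sketch below shows how a static analyzer might pattern-match blocks in a Scratch-like program against question templates. This is a minimal, hypothetical illustration: the names (ScratchBlock, QUESTION_TEMPLATES, generate_questions) and the simplified AST are assumptions for this example and do not reflect LitterBox's actual API or the paper's full set of 30 question types.

```python
# Hypothetical sketch: template-based question generation over a
# simplified Scratch-like AST. Not LitterBox's real API.

from dataclasses import dataclass, field

@dataclass
class ScratchBlock:
    opcode: str                                  # e.g. "event_whenflagclicked"
    fields: dict = field(default_factory=dict)   # block parameters
    children: list = field(default_factory=list) # nested blocks

# Map block opcodes to question templates; a real tool would cover many
# more block types and question categories from a comprehension model.
QUESTION_TEMPLATES = {
    "control_repeat":
        "How many times does the loop in sprite '{sprite}' repeat?",
    "data_setvariableto":
        "What value does the variable '{var}' hold after this script runs?",
    "event_whenflagclicked":
        "What does sprite '{sprite}' do when the green flag is clicked?",
}

def walk(block):
    # Depth-first traversal of the block tree.
    yield block
    for child in block.children:
        yield from walk(child)

def generate_questions(script, sprite_name):
    questions = []
    for block in walk(script):
        template = QUESTION_TEMPLATES.get(block.opcode)
        if template:
            questions.append(template.format(
                sprite=sprite_name,
                var=block.fields.get("VARIABLE", "?")))
    return questions

# Example: a script that sets a variable inside a repeat loop.
script = ScratchBlock("event_whenflagclicked", children=[
    ScratchBlock("control_repeat", fields={"TIMES": 10}, children=[
        ScratchBlock("data_setvariableto",
                     fields={"VARIABLE": "score", "VALUE": 1}),
    ]),
])

for question in generate_questions(script, "Cat"):
    print(question)
```

Run on the example script, this prints one question per matched block; a teacher-facing tool would then filter and phrase such candidates so they target comprehension of the student's own program rather than generic recall.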
Similar Papers
ViScratch: Using Large Language Models and Gameplay Videos for Automated Feedback in Scratch
Software Engineering
Fixes coding mistakes by watching and reading.
LitterBox+: An Extensible Framework for LLM-enhanced Scratch Static Code Analysis
Software Engineering
Helps kids use smart AI to code games.
Focusing on Students, not Machines: Grounded Question Generation and Automated Answer Grading
Computation and Language
Makes homework and tests easier for teachers.