Score: 1

ViScratch: Using Large Language Models and Gameplay Videos for Automated Feedback in Scratch

Published: September 14, 2025 | arXiv ID: 2509.11065v1

By: Yuan Si, Daming Li, Hanyuan Shi, and more

Potential Business Impact:

Fixes Scratch coding mistakes by watching gameplay video and reading block code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Block-based programming environments such as Scratch are increasingly popular in programming education, particularly for young learners. While the use of blocks helps prevent syntax errors, semantic bugs remain common and difficult to debug. Existing tools for Scratch debugging rely heavily on predefined rules or manual user input, and, crucially, they ignore the platform's inherently visual nature. We introduce ViScratch, the first multimodal feedback generation system for Scratch that leverages both a project's block code and its generated gameplay video to diagnose and repair bugs. ViScratch uses a two-stage pipeline: a vision-language model first aligns visual symptoms with code structure to identify a single critical issue, then proposes minimal, abstract-syntax-tree-level repairs that are verified via execution in the Scratch virtual machine. We evaluate ViScratch on a set of real-world Scratch projects against state-of-the-art LLM-based tools and human testers. Results show that gameplay video is a crucial debugging signal: ViScratch substantially outperforms prior tools in both bug identification and repair quality, even without access to project descriptions or goals. This work demonstrates that video can serve as a first-class specification in visual programming environments, opening new directions for LLM-based debugging beyond symbolic code alone.
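The abstract describes a diagnose-repair-verify loop. The Python outline below is one way such a two-stage pipeline could be structured; every function name, signature, and the retry budget are illustrative assumptions, not the authors' published implementation.

```python
# A minimal sketch of the two-stage pipeline the abstract describes:
# (1) a vision-language model aligns gameplay-video symptoms with the
# block code to pick one critical issue; (2) a minimal AST-level repair
# is proposed and verified by re-executing the project in the Scratch VM.
# Every name below is a hypothetical placeholder, not ViScratch's API.

from dataclasses import dataclass


@dataclass
class Diagnosis:
    symptom: str         # visual symptom seen in the gameplay video
    suspect_blocks: str  # script/blocks the symptom is aligned with


def diagnose_with_vlm(video_path: str, project_ast: dict) -> Diagnosis:
    """Stage 1 (hypothetical): prompt a vision-language model with the
    gameplay video and the serialized block AST; return the single most
    critical issue it identifies."""
    raise NotImplementedError  # stands in for a VLM call


def propose_minimal_repair(project_ast: dict, diagnosis: Diagnosis) -> dict:
    """Stage 2 (hypothetical): request a minimal edit to the abstract
    syntax tree that addresses the diagnosed issue."""
    raise NotImplementedError


def execute_in_scratch_vm(project_ast: dict) -> str:
    """Run the candidate project in the Scratch virtual machine and
    return a path to the newly captured gameplay video."""
    raise NotImplementedError


def symptom_resolved(video_path: str, diagnosis: Diagnosis) -> bool:
    """Check whether the original visual symptom is gone from the
    re-captured gameplay video."""
    raise NotImplementedError


def repair_project(project_ast: dict, video_path: str,
                   max_rounds: int = 3) -> dict:
    """Diagnose -> repair -> verify loop; the retry budget is an
    assumption, not something stated in the abstract."""
    for _ in range(max_rounds):
        diagnosis = diagnose_with_vlm(video_path, project_ast)
        candidate = propose_minimal_repair(project_ast, diagnosis)
        video_path = execute_in_scratch_vm(candidate)
        if symptom_resolved(video_path, diagnosis):
            return candidate
        project_ast = candidate
    return project_ast
```

Using the re-captured gameplay video as the verification signal mirrors the paper's central claim that video can serve as a first-class specification.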

Country of Origin
🇨🇦 Canada

Page Count
20 pages

Category
Computer Science: Software Engineering