LitterBox+: An Extensible Framework for LLM-enhanced Scratch Static Code Analysis
By: Benedikt Fein, Florian Obermüller, Gordon Fraser
Potential Business Impact:
Helps kids get AI help while coding games.
Large language models (LLMs) have become an essential tool for supporting developers who use traditional text-based programming languages, but the graphical notation of the block-based Scratch programming environment inhibits the use of LLMs. To overcome this limitation, we propose the LitterBox+ framework, which extends the Scratch static code analysis tool LitterBox with the generative abilities of LLMs. By converting block-based code to a textual representation suitable for LLMs, LitterBox+ allows users to query LLMs about their programs and about quality issues reported by LitterBox, and to generate code fixes. Besides offering a programmatic API for these functionalities, LitterBox+ also extends the Scratch user interface to make them available directly in the environment familiar to learners. The framework is designed to be easily extensible with other prompts, LLM providers, and new features combining the program analysis capabilities of LitterBox with the generative features of LLMs. We provide a screencast demonstrating the tool at https://youtu.be/RZ6E0xgrIgQ.
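The abstract's key step is converting block-based code to a textual representation an LLM can consume. The paper does not specify the exact conversion, so the following is only a minimal sketch of the idea, assuming a simplified version of the block graph found in Scratch `project.json` files (each block has an `opcode`, a `next` pointer, and `fields`; the names `blocks_to_text` and the example blocks are hypothetical, not LitterBox+ APIs):

```python
def blocks_to_text(blocks: dict) -> str:
    """Render a simplified Scratch block graph as one pseudo-code line per block.

    `blocks` maps block ids to dicts with "opcode", "next", and "fields",
    loosely mimicking the .sb3 project.json format (heavily simplified here).
    """
    # Top-level (hat) blocks are those no other block points to via "next".
    followers = {b.get("next") for b in blocks.values() if b.get("next")}
    tops = [bid for bid in blocks if bid not in followers]

    lines = []
    for top in tops:
        bid = top
        # Walk the "next" chain of each script and emit one line per block.
        while bid is not None:
            block = blocks[bid]
            args = " ".join(str(v) for v in block.get("fields", {}).values())
            lines.append(f"{block['opcode']} {args}".strip())
            bid = block.get("next")
    return "\n".join(lines)

# Hypothetical two-block script: "when flag clicked" -> "move 10 steps".
example = {
    "a": {"opcode": "event_whenflagclicked", "next": "b", "fields": {}},
    "b": {"opcode": "motion_movesteps", "next": None, "fields": {"STEPS": 10}},
}
print(blocks_to_text(example))
# → event_whenflagclicked
# → motion_movesteps 10
```

A flat listing like this can then be embedded in an LLM prompt together with LitterBox's reported quality issues; the real tool presumably handles nesting, inputs, and multiple sprites, which this sketch omits.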
Similar Papers
ViScratch: Using Large Language Models and Gameplay Videos for Automated Feedback in Scratch
Software Engineering
Fixes coding mistakes by watching and reading.
Automatically Generating Questions About Scratch Programs
Software Engineering
Helps teachers check if students understand code.
Static Analysis as a Feedback Loop: Enhancing LLM-Generated Code Beyond Correctness
Software Engineering
Makes computer code safer and easier to read.