WIP: Leveraging LLMs for Enforcing Design Principles in Student Code: Analysis of Prompting Strategies and RAG
By: Dhruv Kolhatkar, Soubhagya Akkena, Edward F. Gehringer
Potential Business Impact:
Helps students write better computer code.
This work-in-progress research-to-practice paper explores the integration of Large Language Models (LLMs) into the code-review process for open-source software projects developed in computer science and software engineering courses. The focus is on an automated feedback tool that evaluates student code for adherence to key object-oriented design principles, addressing the need for more effective and scalable ways to teach software design best practices. The innovative practice leverages LLMs and Retrieval-Augmented Generation (RAG) to build an automated feedback system that checks student code against principles such as SOLID and DRY and against common design patterns. The paper analyzes the effectiveness of various prompting strategies and of RAG integration. Preliminary findings show promising improvements in code quality. Future work will aim to improve model accuracy and expand support to additional design principles.
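As a rough illustration of the kind of pipeline the abstract describes, the sketch below shows one way an LLM + RAG feedback loop for design-principle review could be wired together. It is not the authors' implementation: the guideline snippets, the keyword-overlap retrieval (standing in for embedding search), and the call_llm stub are all assumptions made for the example.

"""Minimal sketch of an LLM + RAG feedback loop for design-principle review.

Illustrative only: the guideline snippets, the retrieval scoring, and the
call_llm stub are assumptions, not the system described in the paper.
"""

from collections import Counter

# Tiny in-memory "knowledge base" of design-principle guidance.
# A real system would index course rubrics or textbook excerpts in a vector store.
GUIDELINES = [
    ("SRP", "Single Responsibility: a class should have one reason to change."),
    ("OCP", "Open/Closed: classes should be open for extension, closed for modification."),
    ("DRY", "Don't Repeat Yourself: factor duplicated logic into shared methods."),
    ("DIP", "Dependency Inversion: depend on abstractions, not concrete classes."),
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval standing in for embedding-based search."""
    q_tokens = Counter(query.lower().split())
    scored = []
    for name, text in GUIDELINES:
        overlap = sum((Counter(text.lower().split()) & q_tokens).values())
        scored.append((overlap, f"{name}: {text}"))
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(student_code: str, context: list[str]) -> str:
    """Combine retrieved guidance with the student submission."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "You are a code reviewer for a software-engineering course.\n"
        f"Relevant design guidance:\n{joined}\n\n"
        "Review the following student code for violations of these principles "
        "and give concrete, actionable feedback:\n"
        f"{student_code}\n"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the actual model call (e.g., a hosted or local LLM API)."""
    return "(model feedback would appear here)"

if __name__ == "__main__":
    submission = (
        "class ReportManager:\n"
        "    def load(self): ...\n"
        "    def render_html(self): ...\n"
        "    def email(self): ..."
    )
    ctx = retrieve("single responsibility duplicated logic class design", k=2)
    print(call_llm(build_prompt(submission, ctx)))

In this toy version, the retrieved guideline text is prepended to the review prompt, which is the basic RAG pattern the paper evaluates alongside different prompting strategies.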
Similar Papers
Augmenting Large Language Models with Static Code Analysis for Automated Code Quality Improvements
Software Engineering
Fixes bugs in computer code automatically and more quickly.
On Automating Security Policies with Contemporary LLMs
Cryptography and Security
Automates computer defenses against online attacks.
Using LLMs and Essence to Support Software Practice Adoption
Software Engineering
Helps software teams follow best practices.