Small Models, Big Support: A Local LLM Framework for Teacher-Centric Content Creation and Assessment using RAG and CAG
By: Zarreen Reza, Alexander Mazur, Michael T. Dugdale, and more
Potential Business Impact:
Helps teachers make custom lessons safely and cheaply.
While Large Language Models (LLMs) are increasingly utilized as student-facing educational aids, their potential to directly support educators, particularly through locally deployable and customizable open-source solutions, remains significantly underexplored. Many existing educational solutions rely on cloud-based infrastructure or proprietary tools, which are costly and may raise privacy concerns. Regulated institutions with limited budgets require affordable, self-hosted alternatives. We introduce an end-to-end, open-source framework that leverages small (3B-7B parameter), locally deployed LLMs for customized teaching-material generation and assessment. Our system incorporates an interactive refinement loop, which is crucial for getting reliable output from small models, and an auxiliary LLM verifier that mitigates jailbreaking risks, improving output reliability and safety. Using Retrieval-Augmented and Context-Augmented Generation (RAG/CAG), it produces factually accurate, customized, pedagogically styled content. The system is deployed on-premises for data privacy and validated through an evaluation pipeline and a college physics pilot; our findings show that carefully engineered small-LLM systems can offer robust, affordable, practical, and safe educator support, achieving utility comparable to larger models for targeted tasks.
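To make the described pipeline concrete, below is a minimal, illustrative sketch (not the authors' implementation) of the generate-verify-refine pattern the abstract outlines: a small local model produces material grounded in retrieved course documents, an auxiliary verifier model checks the draft, and failed drafts are regenerated with feedback. It assumes an OpenAI-compatible local server (such as Ollama or llama.cpp server) at the given URL; the endpoint, model names, prompts, and the toy keyword retriever are all placeholders.

```python
"""Illustrative sketch only: RAG-style generation with an auxiliary verifier,
run against a locally hosted small model. Endpoint, model names, prompts, and
the toy retriever are assumptions, not the paper's actual implementation."""
import requests

LOCAL_API = "http://localhost:11434/v1/chat/completions"  # assumed OpenAI-compatible local server
GENERATOR_MODEL = "llama3.2:3b"   # placeholder small (3B-7B) generator model
VERIFIER_MODEL = "qwen2.5:7b"     # placeholder auxiliary verifier model


def chat(model: str, messages: list[dict]) -> str:
    """Send a chat request to the local server and return the reply text."""
    resp = requests.post(LOCAL_API, json={"model": model, "messages": messages}, timeout=120)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy lexical retriever: rank course documents by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]


def generate_material(topic: str, style: str, documents: list[str], feedback: str = "") -> str:
    """One retrieval-augmented generation pass, optionally conditioned on teacher feedback."""
    context = "\n\n".join(retrieve(topic, documents))
    prompt = (
        f"Using ONLY the course material below, write a {style} lesson segment on '{topic}'.\n\n"
        f"Course material:\n{context}\n"
    )
    if feedback:
        prompt += f"\nRevise according to this feedback: {feedback}\n"
    return chat(GENERATOR_MODEL, [{"role": "user", "content": prompt}])


def verify(draft: str, documents: list[str]) -> bool:
    """Auxiliary verifier pass: check that the draft is grounded in the sources and safe."""
    sources = "\n\n".join(documents)
    verdict = chat(VERIFIER_MODEL, [{
        "role": "user",
        "content": (
            "Answer PASS or FAIL only. Is the draft factually consistent with the sources "
            f"and free of unsafe or off-task content?\n\nSources:\n{sources}\n\nDraft:\n{draft}"
        ),
    }])
    return verdict.strip().upper().startswith("PASS")


if __name__ == "__main__":
    course_notes = [
        "Newton's second law states that force equals mass times acceleration (F = ma).",
        "Kinetic energy is given by KE = 1/2 m v^2 and is measured in joules.",
    ]
    feedback = ""
    for attempt in range(3):  # interactive loop: teacher feedback would enter here in practice
        draft = generate_material("Newton's second law", "worked-example", course_notes, feedback)
        if verify(draft, course_notes):
            print(draft)
            break
        feedback = "The previous draft failed verification; stay strictly within the sources."
```

In the paper's framework the feedback in this loop comes from the educator and the verifier, which is what lets a small model reach acceptable quality on targeted tasks; the sketch above only mimics that control flow.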
Similar Papers
Towards an Efficient, Customizable, and Accessible AI Tutor
Computers and Society
AI tutors work without internet for all students.
5G Network Automation Using Local Large Language Models and Retrieval-Augmented Generation
Networking and Internet Architecture
Automates network setup, keeping your data private.
Large Language Models for Explainable Threat Intelligence
Computation and Language
Finds cyber threats and explains how it knows.