Artificial Intelligence-Powered Assessment Framework for Skill-Oriented Engineering Lab Education
By: Vaishnavi Sharma, Rakesh Thakur, Shashwat Sharma, and others
Potential Business Impact:
Makes computer science lab courses fairer to assess and more effective at building hands-on skills.
Practical lab education in computer science often faces challenges such as plagiarism, lack of proper lab records, unstructured lab sessions, inadequate execution and assessment, limited practical learning, low student engagement, and absence of progress tracking for both students and faculty, resulting in graduates with insufficient hands-on skills. In this paper, we introduce AsseslyAI, which addresses these challenges through online lab allocation, a unique lab problem for each student, AI-proctored viva evaluations, and gamified simulators to enhance engagement and conceptual mastery. While existing platforms generate questions based on topics, our framework is fine-tuned on a 10k+ question-answer dataset built from AI/ML lab questions to dynamically generate diverse, code-rich assessments. Validation metrics show high question-answer similarity, ensuring accurate answers and non-repetitive questions. By unifying dataset-driven question generation, adaptive difficulty, plagiarism resistance, and evaluation in a single pipeline, our framework advances beyond traditional automated grading tools and offers a scalable path to producing genuinely skilled graduates.
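The abstract reports that question-answer similarity metrics are used to ensure non-repetitive questions, but does not specify the metric. A minimal sketch of how such a repetition filter might work, assuming a simple bag-of-words cosine similarity (the paper's actual metric and threshold are not given and may differ):

```python
from collections import Counter
import math


def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts, in [0.0, 1.0]."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def filter_repetitive(questions: list[str], threshold: float = 0.8) -> list[str]:
    """Keep only questions that are not near-duplicates of an earlier one."""
    kept: list[str] = []
    for q in questions:
        # Accept q only if it is sufficiently dissimilar to every kept question.
        if all(cosine_similarity(q, k) < threshold for k in kept):
            kept.append(q)
    return kept
```

For example, `filter_repetitive(["Implement k-means on iris", "Implement k-means on iris", "Build a decision tree"])` would drop the duplicate and keep two questions. A production system would more plausibly use learned sentence embeddings rather than raw token counts, but the filtering logic is the same.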
Similar Papers
Hybrid Instructor-AI Assessment in Academic Projects: Efficiency, Equity, and Methodological Lessons
Computers and Society
AI helps teachers grade student reports faster and more accurately.
AI-Driven Grading and Moderation for Collaborative Projects in Computer Science Education
Human-Computer Interaction
Grades students fairly in group projects.
Assessing the Quality of AI-Generated Exams: A Large-Scale Field Study
Computers and Society
AI makes better tests for students and teachers.