The Conversational Exam: A Scalable Assessment Design for the AI Era
By: Lorena A. Barba, Laura Stegner
Potential Business Impact:
Tests students' real understanding, not AI's.
Traditional assessment methods collapse when students use generative AI to complete work without genuine engagement, creating an illusion of competence in which they believe they're learning but aren't. This paper presents the conversational exam, a scalable oral examination format that restores assessment validity by having students code live while explaining their reasoning. Drawing on human-computer interaction principles, we examined 58 students in small groups across just two days, demonstrating that oral exams can scale to typical class sizes. The format combines authentic practice (students work with documentation and supervised AI access) with inherent validity (real-time performance cannot be faked). We provide detailed implementation guidance to help instructors adapt this approach, offering a practical path forward for educators who feel caught between banning AI entirely and accepting that valid assessment is impossible.
Similar Papers
Assessing the Quality of AI-Generated Exams: A Large-Scale Field Study
Computers and Society
AI makes better tests for students and teachers.
Beyond Static Scoring: Enhancing Assessment Validity via AI-Generated Interactive Verification
Computers and Society
Helps teachers check if students really learned.
Towards Embodied Conversational Agents for Reducing Oral Exam Anxiety in Extended Reality
Human-Computer Interaction
Practice oral exams with a friendly robot.