Simulated Students in Tutoring Dialogues: Substance or Illusion?
By: Alexander Scarlatos, Jaewook Lee, Simon Woodhead, and more
Potential Business Impact:
Helps AI tutors be trained and tested with simulated students instead of real ones.
Advances in large language models (LLMs) enable many new innovations in education. However, evaluating the effectiveness of new technology requires real students, which is time-consuming and hard to scale up. Therefore, many recent works on LLM-powered tutoring solutions have used simulated students for both training and evaluation, often via simple prompting. Surprisingly, little work has been done to ensure or even measure the quality of simulated students. In this work, we formally define the student simulation task, propose a set of evaluation metrics that span linguistic, behavioral, and cognitive aspects, and benchmark a wide range of student simulation methods on these metrics. We experiment on a real-world math tutoring dialogue dataset, where both automated and human evaluation results show that prompting strategies for student simulation perform poorly; supervised fine-tuning and preference optimization yield much better but still limited performance, motivating future work on this challenging task.
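To make the "simple prompting" approach mentioned above concrete, here is a minimal sketch of how prior work typically simulates a student with an LLM: the model is given a persona and the tutoring dialogue so far, and asked to produce the next student turn. The model name, prompt wording, and use of the OpenAI Python SDK are illustrative assumptions, not the setup used in this paper.

```python
# Minimal sketch of prompting-based student simulation (illustrative only).
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

def simulated_student_reply(dialogue_history: list[dict], student_profile: str) -> str:
    """Generate the next student turn in a tutoring dialogue."""
    system_prompt = (
        "You are a student working through a math problem with a tutor. "
        f"Student profile: {student_profile}. "
        "Respond as this student would, including plausible mistakes or misconceptions."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "system", "content": system_prompt}, *dialogue_history],
    )
    return response.choices[0].message.content

# Example usage: tutor turns are passed as "user" messages,
# earlier simulated-student turns as "assistant" messages.
history = [
    {"role": "user", "content": "Can you try simplifying 3x + 2x for me?"},
]
print(simulated_student_reply(history, "7th grader who struggles with combining like terms"))
```

The paper's finding is that this kind of prompting alone produces unconvincing students, which is why it compares it against supervised fine-tuning and preference optimization on real tutoring dialogues.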
Similar Papers
Simulating Students with Large Language Models: A Review of Architecture, Mechanisms, and Role Modelling in Education with Generative AI
Computers and Society
Lets computers act like students to test teaching.
Which Type of Students can LLMs Act? Investigating Authentic Simulation with Graph-based Human-AI Collaborative System
Computers and Society
Tests how authentically LLMs can act as different types of students.
Exploring LLM-based Student Simulation for Metacognitive Cultivation
Computers and Society
Uses simulated students to help learners build metacognitive skills.