Toward Automated Qualitative Analysis: Leveraging Large Language Models for Tutoring Dialogue Evaluation
By: Megan Gu, Chloe Qianhui Zhao, Claire Liu, and more
Potential Business Impact:
Helps computers automatically judge whether tutors use good teaching strategies.
Our study introduces an automated system that leverages large language models (LLMs) to assess the effectiveness of five key tutoring strategies: (1) giving effective praise, (2) reacting to errors, (3) determining what students know, (4) helping students manage inequity, and (5) responding to negative self-talk. Using a public dataset from the Teacher-Student Chatroom Corpus, our system classifies each tutoring strategy as either employed as desired or undesired. We use GPT-3.5 with few-shot prompting to assess the use of these strategies and analyze tutoring dialogues. Across the five tutoring strategies, True Negative Rates (TNR) range from 0.655 to 0.738 and Recall ranges from 0.327 to 0.432, indicating that the model is effective at ruling out incorrect classifications but struggles to consistently identify the correct strategy. The strategy "helping students manage inequity" showed the highest performance, with a TNR of 0.738 and a Recall of 0.432. The study highlights the potential of LLMs for tutoring strategy analysis and outlines directions for future work, including incorporating more advanced models for more nuanced feedback.
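The pipeline described above can be sketched in a few lines: build a few-shot prompt asking the model to label a tutoring turn as "desired" or "undesired" for one strategy, then score the predictions with the two metrics the abstract reports, TNR = TN / (TN + FP) and Recall = TP / (TP + FN). This is a minimal illustration, not the authors' code; the function names, prompt wording, and example labels are assumptions.

```python
def build_fewshot_prompt(strategy, examples, dialogue_turn):
    """Assemble a few-shot classification prompt for one tutoring strategy.

    `examples` is a list of (turn_text, label) pairs used as in-context
    demonstrations; the prompt wording here is illustrative only.
    """
    lines = [
        f"Classify whether the tutor's turn employs the strategy "
        f"'{strategy}' as desired. Answer 'desired' or 'undesired'."
    ]
    for text, label in examples:
        lines.append(f"Turn: {text}\nLabel: {label}")
    lines.append(f"Turn: {dialogue_turn}\nLabel:")
    return "\n\n".join(lines)


def tnr_and_recall(y_true, y_pred):
    """Compute TNR = TN/(TN+FP) and Recall = TP/(TP+FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp), tp / (tp + fn)
```

For example, `tnr_and_recall([1, 1, 0, 0, 0], [1, 0, 0, 0, 1])` returns a TNR of 2/3 and a Recall of 0.5; the prompt returned by `build_fewshot_prompt` would be sent to GPT-3.5 and its one-word answer mapped back to a 0/1 prediction.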
Similar Papers
Training LLM-based Tutors to Improve Student Learning Outcomes in Dialogues
Computation and Language
Trains AI tutors so students learn more from dialogues.
Exploring LLMs for Predicting Tutor Strategy and Student Outcomes in Dialogues
Computation and Language
AI tutors learn how to teach better.
Beyond Final Answers: Evaluating Large Language Models for Math Tutoring
Human-Computer Interaction
Helps computers teach math, but they make mistakes.