PATS: Personality-Aware Teaching Strategies with Large Language Model Tutors
By: Donya Rooein, Sankalan Pal Chowdhury, Mariia Eremeeva, and more
Recent advances in large language models (LLMs) demonstrate their potential as educational tutors. However, different tutoring strategies benefit different student personalities, and mismatches can be counterproductive to student outcomes. Despite this, current LLM tutoring systems do not take student personality traits into account. To address this problem, we first construct a taxonomy, grounded in pedagogical literature, that links pedagogical methods to personality profiles. We then simulate student-teacher conversations and use our framework to let the LLM tutor adapt its strategy to each simulated student's personality. We evaluate the resulting conversations with human teachers and find that they consistently prefer our approach over two baselines. Our method also increases the use of less common, high-impact strategies such as role-playing, which both human and LLM annotators significantly prefer. Our findings pave the way for more personalized and effective LLM use in educational applications.