Socratic Students: Teaching Language Models to Learn by Asking Questions
By: Rajeev Bhatt Ambati, Tianyi Niu, Aashu Singh, and more
Large Language Models (LLMs) excel at static interactions, where they answer user queries by retrieving knowledge encoded in their parameters. However, in many real-world settings, such as educational tutoring or medical assistance, relevant information is not directly available and must be actively acquired through dynamic interactions. An interactive agent would recognize its own uncertainty, ask targeted questions, and retain new knowledge efficiently. Prior work has primarily explored effective ways for a teacher to instruct the student, where the teacher identifies student gaps and provides guidance. In this work, we shift the focus to the student and investigate effective strategies for actively querying the teacher to acquire useful information. Across math and coding benchmarks, where baseline student models begin with near-zero performance, we show that student-led approaches consistently yield absolute Pass@k improvements of at least 0.5 over static baselines. To improve question quality, we train students using Direct Preference Optimization (DPO) with guidance from either the student itself or a stronger student. We find that this guided training enables smaller models to learn how to ask better questions, further enhancing learning efficiency.
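The abstract leaves the DPO training recipe at a high level. As a rough illustrative sketch only, the Python snippet below shows one plausible way preference pairs over student questions could be assembled: a candidate question is preferred if the teacher's answer to it lets the student produce a passing solution. The helpers propose_questions, ask_teacher, solve_with_hint, and passes_tests are hypothetical stand-ins for the student/teacher model calls and the benchmark check, not the paper's actual interface.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PreferencePair:
    prompt: str    # problem statement shown to the student
    chosen: str    # question whose teacher answer led to a passing solution
    rejected: str  # question whose teacher answer did not

def build_dpo_pairs(
    problems: List[str],
    propose_questions: Callable[[str], List[str]],  # student samples candidate questions
    ask_teacher: Callable[[str, str], str],         # teacher answers a question
    solve_with_hint: Callable[[str, str], str],     # student retries using the answer
    passes_tests: Callable[[str, str], bool],       # e.g. unit tests or exact match
) -> List[PreferencePair]:
    # Label each candidate question by whether the teacher's answer helped the
    # student solve the problem, then pair helpful vs. unhelpful questions for DPO.
    pairs: List[PreferencePair] = []
    for problem in problems:
        helpful, unhelpful = [], []
        for question in propose_questions(problem):
            answer = ask_teacher(problem, question)
            attempt = solve_with_hint(problem, answer)
            (helpful if passes_tests(problem, attempt) else unhelpful).append(question)
        # One pair per (helpful, unhelpful) combination; DPO then trains the
        # student to prefer questions that actually unlock a correct solution.
        for good in helpful:
            for bad in unhelpful:
                pairs.append(PreferencePair(problem, chosen=good, rejected=bad))
    return pairs

Under this reading, "guidance from a stronger student" would simply mean drawing the candidate questions (or the chosen side of each pair) from a larger model, while the weaker student is the one fine-tuned on the resulting pairs.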
Similar Papers
SocraticAI: Transforming LLMs into Guided CS Tutors Through Scaffolded Interaction
Computers and Society
Teaches students to use AI smartly for learning.
Learning by Teaching: Engaging Students as Instructors of Large Language Models in Computer Science Education
Computers and Society
Students teach computers, learn better.
Exploring Conversational Design Choices in LLMs for Pedagogical Purposes: Socratic and Narrative Approaches for Improving Instructor's Teaching Practice
Human-Computer Interaction
Helps teachers learn to use AI better.