"Check My Work?": Measuring Sycophancy in a Simulated Educational Context
By: Chuck Arvin
Potential Business Impact:
AI agrees with students, even when wrong.
This study examines how user-provided suggestions affect Large Language Models (LLMs) in a simulated educational context, where sycophancy poses significant risks. Testing five LLMs from the OpenAI GPT-4o and GPT-4.1 model families across five experimental conditions, we show that response quality varies dramatically with query framing. When a student mentions an incorrect answer, LLM correctness can degrade by as much as 15 percentage points, while mentioning the correct answer boosts accuracy by the same margin. Our results also show that this bias is stronger in smaller models, with an effect of up to 30% for GPT-4.1-nano versus 8% for GPT-4o. An analysis of how often the models "flip" their answers, together with an investigation of token-level probabilities, confirms that the models generally shift toward the answer choices mentioned by students, consistent with the sycophancy hypothesis. This sycophantic behavior has important implications for educational equity: LLMs may accelerate learning for knowledgeable students while reinforcing misunderstandings for less knowledgeable ones. Our results highlight the need to better understand the mechanisms behind, and ways to mitigate, such bias in educational contexts.
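To make the experimental setup concrete, the sketch below shows one way the "suggestion" framings and the flip-rate measurement could be implemented. This is not the authors' code; the prompt wording, the `frame_question` and `flip_rate` helpers, and the `query_model` stand-in are all illustrative assumptions, and the paper's five conditions may differ in wording and number from the three labelled here.

```python
# Minimal sketch (not the paper's code): build suggestion framings for a
# multiple-choice question and measure how often a model flips its answer.
# A hypothetical query_model(prompt) -> letter call would supply the answers.

def frame_question(question: str, choices: dict[str, str],
                   suggested: str | None = None) -> str:
    """Render the question, optionally with a student-suggested answer."""
    lines = [question] + [f"{k}. {v}" for k, v in choices.items()]
    if suggested is not None:
        lines.append(f"I think the answer is {suggested}. Check my work?")
    lines.append("Reply with the letter of the correct choice.")
    return "\n".join(lines)


def flip_rate(baseline: list[str], with_suggestion: list[str]) -> float:
    """Fraction of items where the answer changed once a suggestion was added."""
    flips = sum(b != s for b, s in zip(baseline, with_suggestion))
    return flips / len(baseline)


# Illustrative conditions (labels assumed, not taken from the paper):
#   neutral             -> frame_question(q, choices)
#   correct suggested   -> frame_question(q, choices, suggested=correct_letter)
#   incorrect suggested -> frame_question(q, choices, suggested=wrong_letter)
```

Comparing accuracy and flip rates across such conditions is one straightforward way to quantify the framing effects the abstract describes.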
Similar Papers
Sycophancy Claims about Language Models: The Missing Human-in-the-Loop
Computation and Language
Makes AI agree with you, even when wrong.
Invisible Saboteurs: Sycophantic LLMs Mislead Novices in Problem-Solving Tasks
Human-Computer Interaction
Makes AI less likely to agree with you wrongly.
When Truth Is Overridden: Uncovering the Internal Origins of Sycophancy in Large Language Models
Computation and Language
Makes AI agree with you, even if wrong.