Learning to Make MISTAKEs: Modeling Incorrect Student Thinking And Key Errors
By: Alexis Ross, Jacob Andreas
Potential Business Impact:
Teaches computers to make smart mistakes.
Research on reasoning in language models (LMs) predominantly focuses on improving the correctness of their outputs. But some important applications require modeling reasoning patterns that are incorrect. For example, automated systems that can reason about and simulate student errors are useful for providing real-time feedback in the classroom or offline practice for educators-in-training. This paper presents a new method, MISTAKE, that (1) constructs high-quality synthetic examples of reasoning errors by leveraging cycle consistency between incorrect answers and latent misconceptions; and (2) uses the generated data to learn models for student simulation, misconception classification, and answer generation. We evaluate MISTAKE on three educational tasks and find that it yields (1) higher accuracy when simulating incorrect student answers based on specific misconceptions, (2) better performance when inferring latent misconceptions from observed incorrect answers, and (3) closer alignment with expert-written distractor answers when generating incorrect answers (e.g., for multiple-choice tests).
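The cycle-consistency idea in the abstract can be illustrated with a minimal sketch: generate an incorrect answer from a candidate misconception, then check whether the misconception can be recovered from that answer, and keep only the triples that survive the round trip. Everything below is illustrative, not the paper's actual code: the toy misconception rules and the stub functions standing in for LM calls are assumptions made for the example.

```python
# Hedged sketch of cycle-consistency filtering for synthetic error data.
# The misconception rules and stub "LM" functions are illustrative stand-ins.

# Toy misconception bank for adding fractions a/b + c/d:
# each maps a question (a, b, c, d) to a predictably wrong answer.
MISCONCEPTIONS = {
    "adds-denominators": lambda a, b, c, d: (a + c, b + d),  # a/b + c/d -> (a+c)/(b+d)
    "ignores-second-denominator": lambda a, b, c, d: (a + c, b),  # keeps first denominator
}

def simulate_student(misconception, question):
    """Stand-in for an LM that answers a question *under* a given misconception."""
    a, b, c, d = question
    return MISCONCEPTIONS[misconception](a, b, c, d)

def infer_misconception(question, wrong_answer):
    """Stand-in for an LM that infers the latent misconception behind an answer."""
    for name, rule in MISCONCEPTIONS.items():
        if rule(*question) == wrong_answer:
            return name
    return None

def cycle_consistent_examples(questions):
    """Keep (misconception, question, answer) triples that survive the round trip:
    misconception -> incorrect answer -> re-inferred misconception."""
    kept = []
    for m in MISCONCEPTIONS:
        for q in questions:
            ans = simulate_student(m, q)
            if infer_misconception(q, ans) == m:  # cycle-consistency check
                kept.append((m, q, ans))
    return kept
```

The filtered triples would then serve as training data for the three downstream tasks the abstract lists: student simulation, misconception classification, and distractor generation.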
Similar Papers
Can Large Reasoning Models Improve Accuracy on Mathematical Tasks Using Flawed Thinking?
Machine Learning (CS)
Teaches computers to fix their math mistakes.
LEMMA: Learning from Errors for MatheMatical Advancement in LLMs
Machine Learning (CS)
Teaches computers to learn from math mistakes.
MalruleLib: Large-Scale Executable Misconception Reasoning with Step Traces for Modeling Student Thinking in Mathematics
Computation and Language
Teaches computers to spot math mistakes students make.