MMTutorBench: The First Multimodal Benchmark for AI Math Tutoring
By: Tengchao Yang, Sichen Guo, Mengzhao Jia, and more
Potential Business Impact:
Measures how well AI tutors can diagnose student difficulties and guide them through math problems.
Effective math tutoring requires not only solving problems but also diagnosing students' difficulties and guiding them step by step. While multimodal large language models (MLLMs) show promise, existing benchmarks largely overlook these tutoring skills. We introduce MMTutorBench, the first benchmark for AI math tutoring, consisting of 685 problems built around pedagogically significant key-steps. Each problem is paired with problem-specific rubrics that enable fine-grained evaluation across six dimensions, and is structured into three tasks: Insight Discovery, Operation Formulation, and Operation Execution. We evaluate 12 leading MLLMs and find clear performance gaps between proprietary and open-source systems, substantial room for improvement relative to human tutors, and consistent trends across input variants: OCR pipelines degrade tutoring quality, few-shot prompting yields limited gains, and our rubric-based LLM-as-a-Judge proves highly reliable. These results highlight both the difficulty and diagnostic value of MMTutorBench for advancing AI tutoring.
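As a rough illustration of how a rubric-based LLM-as-a-Judge evaluation like the one described above might be wired up, here is a minimal sketch. The dimension names, the call_judge helper, and the example data are hypothetical placeholders, not the benchmark's released code or rubric.

```python
# Sketch of rubric-based LLM-as-a-Judge scoring for a tutoring response.
# All identifiers below (RUBRIC_DIMENSIONS, call_judge, example data) are
# illustrative assumptions, not MMTutorBench's actual implementation.

from statistics import mean

# Hypothetical dimension names; the paper reports six rubric dimensions.
RUBRIC_DIMENSIONS = [
    "correctness", "diagnosis", "guidance",
    "step_relevance", "clarity", "pedagogical_tone",
]

def call_judge(prompt: str) -> float:
    """Placeholder for an LLM judge call; returns a score in [0, 1]."""
    return 0.0  # replace with a real model call

def score_response(problem: dict, response: str) -> dict:
    """Score one tutoring response against the problem-specific rubric."""
    scores = {}
    for dim in RUBRIC_DIMENSIONS:
        prompt = (
            f"Problem: {problem['statement']}\n"
            f"Key step: {problem['key_step']}\n"
            f"Rubric ({dim}): {problem['rubric'][dim]}\n"
            f"Tutor response: {response}\n"
            "Rate adherence to this rubric criterion from 0 to 1."
        )
        scores[dim] = call_judge(prompt)
    scores["overall"] = mean(scores.values())
    return scores

if __name__ == "__main__":
    example = {
        "statement": "Solve 2x + 3 = 11.",
        "key_step": "Isolate x by subtracting 3 from both sides.",
        "rubric": {d: "..." for d in RUBRIC_DIMENSIONS},
    }
    print(score_response(example, "First, subtract 3 from both sides..."))
```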
Similar Papers
TutorBench: A Benchmark To Assess Tutoring Capabilities Of Large Language Models
Machine Learning (CS)
Tests how well AI tutors help students learn.
MATP-BENCH: Can MLLM Be a Good Automated Theorem Prover for Multimodal Problems?
Computation and Language
Helps computers prove math theorems using pictures.
Is your multimodal large language model a good science tutor?
Computation and Language
Teaches computers to be better science tutors.