MMTutorBench: The First Multimodal Benchmark for AI Math Tutoring

Published: October 27, 2025 | arXiv ID: 2510.23477v1

By: Tengchao Yang, Sichen Guo, Mengzhao Jia, and more

Potential Business Impact:

Benchmarks how well AI tutors can diagnose students' difficulties and guide them through math problems step by step.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Effective math tutoring requires not only solving problems but also diagnosing students' difficulties and guiding them step by step. While multimodal large language models (MLLMs) show promise, existing benchmarks largely overlook these tutoring skills. We introduce MMTutorBench, the first benchmark for AI math tutoring, consisting of 685 problems built around pedagogically significant key-steps. Each problem is paired with problem-specific rubrics that enable fine-grained evaluation across six dimensions, and is structured into three tasks: Insight Discovery, Operation Formulation, and Operation Execution. We evaluate 12 leading MLLMs and find clear performance gaps between proprietary and open-source systems, substantial room for improvement compared to human tutors, and consistent trends across input variants: OCR pipelines degrade tutoring quality, few-shot prompting yields limited gains, and our rubric-based LLM-as-a-Judge proves highly reliable. These results highlight both the difficulty and diagnostic value of MMTutorBench for advancing AI tutoring.
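To make the rubric-based LLM-as-a-Judge setup concrete, below is a minimal Python sketch of how such an evaluation loop could look. The paper does not publish this interface; the dimension names, the `TutoringItem` structure, the 0-2 scoring scale, and the `judge_fn` callable are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# Illustrative six-dimension rubric; the paper's actual dimension names may differ.
RUBRIC_DIMENSIONS = [
    "correctness", "diagnosis", "guidance",
    "clarity", "pedagogy", "completeness",
]

@dataclass
class TutoringItem:
    problem: str            # math problem statement (image content or its OCR text)
    key_step: str           # the pedagogically significant key-step being probed
    task: str               # "insight_discovery" | "operation_formulation" | "operation_execution"
    rubric: Dict[str, str]  # problem-specific grading criteria, one entry per dimension

def judge_response(item: TutoringItem,
                   response: str,
                   judge_fn: Callable[[str], str]) -> Dict[str, int]:
    """Score a tutor model's response on each rubric dimension (0-2 scale assumed)."""
    scores = {}
    for dim in RUBRIC_DIMENSIONS:
        prompt = (
            f"Problem: {item.problem}\n"
            f"Key step: {item.key_step}\n"
            f"Tutor response: {response}\n"
            f"Rubric ({dim}): {item.rubric.get(dim, 'n/a')}\n"
            "Return a single integer score from 0 to 2."
        )
        raw = judge_fn(prompt)      # stand-in for a call to a strong judge LLM
        scores[dim] = int(raw.strip())
    return scores

if __name__ == "__main__":
    item = TutoringItem(
        problem="Solve 2x + 3 = 11.",
        key_step="Isolate x by subtracting 3 from both sides.",
        task="operation_formulation",
        rubric={dim: "Award 2 if fully satisfied, 1 if partial, 0 otherwise."
                for dim in RUBRIC_DIMENSIONS},
    )
    stub_judge = lambda prompt: "2"  # replace with a real LLM call in practice
    print(judge_response(item, "First subtract 3 from both sides, giving 2x = 8.", stub_judge))
```

Scoring each dimension with its own problem-specific criterion, rather than asking for one holistic grade, is what the abstract means by fine-grained evaluation; it also makes the judge's outputs easier to audit per dimension.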

Country of Origin
🇺🇸 United States

Page Count
17 pages

Category
Computer Science:
Computation and Language