MathDoc: Benchmarking Structured Extraction and Active Refusal on Noisy Mathematics Exam Papers
By: Chenyue Zhou, Jiayi Tuo, Shitong Qin, and more
The automated extraction of structured questions from paper-based mathematics exams is fundamental to intelligent education, yet remains challenging in real-world settings due to severe visual noise. Existing benchmarks mainly focus on clean documents or generic layout analysis, overlooking both the structural integrity of mathematical problems and the ability of models to actively reject incomplete inputs. We introduce MathDoc, the first benchmark for document-level information extraction from authentic high school mathematics exam papers. MathDoc contains 3,609 carefully curated questions with real-world artifacts and explicitly includes unrecognizable samples to evaluate active refusal behavior. We propose a multi-dimensional evaluation framework covering stem accuracy, visual similarity, and refusal capability. Experiments on state-of-the-art MLLMs, including Qwen3-VL and Gemini-2.5-Pro, show that although end-to-end models achieve strong extraction performance, they consistently fail to refuse illegible inputs, instead producing confident but invalid outputs. These results highlight a critical gap in current MLLMs and establish MathDoc as a benchmark for assessing model reliability under degraded document conditions. Our project repository is available at https://github.com/winnk123/papers/tree/master
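To make the evaluation framework concrete, the sketch below shows one way such scoring could be wired up. It is a minimal, hypothetical illustration: the class and function names (`Sample`, `stem_accuracy`, `evaluate`), the use of character-level similarity as the stem metric, and the treatment of refusals are assumptions for demonstration, not the benchmark's actual implementation.

```python
# Hypothetical sketch of MathDoc-style scoring. Metric choices below are
# illustrative assumptions, not the benchmark's published implementation.
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Sample:
    question_id: str
    gold_stem: str | None       # None marks an unrecognizable sample (correct answer = refuse)
    predicted_stem: str | None  # None marks a model refusal


def stem_accuracy(gold: str, pred: str) -> float:
    """Character-level similarity between gold and extracted stems (proxy metric)."""
    return SequenceMatcher(None, gold, pred).ratio()


def evaluate(samples: list[Sample]) -> dict:
    """Aggregate extraction quality and refusal behavior over a benchmark split."""
    stem_scores, refusal_hits, refusal_total = [], 0, 0
    for s in samples:
        if s.gold_stem is None:
            # Unrecognizable input: the desired behavior is an explicit refusal.
            refusal_total += 1
            refusal_hits += int(s.predicted_stem is None)
        elif s.predicted_stem is not None:
            stem_scores.append(stem_accuracy(s.gold_stem, s.predicted_stem))
        else:
            stem_scores.append(0.0)  # refused a legible question: counts as a miss
    return {
        "mean_stem_accuracy": sum(stem_scores) / max(len(stem_scores), 1),
        "refusal_rate_on_illegible": refusal_hits / max(refusal_total, 1),
    }


if __name__ == "__main__":
    demo = [
        Sample("q1", "Solve x^2 - 4 = 0.", "Solve x^2 - 4 = 0."),
        Sample("q2", None, "Find the derivative of f(x)."),  # confident but invalid output
    ]
    print(evaluate(demo))
```

A low `refusal_rate_on_illegible` alongside a high `mean_stem_accuracy` would reflect the failure mode the abstract describes: strong extraction on legible inputs paired with confident, invalid outputs on unrecognizable ones.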
Similar Papers
Benchmarking Document Parsers on Mathematical Formula Extraction from PDFs
CV and Pattern Recognition
Lets computers understand math in papers.
Grading Handwritten Engineering Exams with Multimodal Large Language Models
CV and Pattern Recognition
Grades handwritten science tests automatically and accurately.
DocVAL: Validated Chain-of-Thought Distillation for Grounded Document VQA
CV and Pattern Recognition
Helps computers understand documents by seeing and reading.