DocVAL: Validated Chain-of-Thought Distillation for Grounded Document VQA
By: Ahmad Mohammadshirazi, Pinaki Prasad Guha Neogi, Dheeraj Kulshrestha, and more
Potential Business Impact:
Helps computers understand documents by seeing and reading.
Document visual question answering (DocVQA) requires models to jointly reason over textual content and spatial layout, yet current systems exhibit a sharp accuracy–efficiency trade-off: large teacher models achieve strong grounding but are too expensive for deployment, while compact students suffer substantial drops in localization performance. We propose DocVAL, a validated chain-of-thought distillation framework that transfers the spatial reasoning ability of a large teacher into a deployable student VLM through three key components: (1) teacher supervision with validation-time text detection to filter and denoise training signals, (2) a multi-module validator (VAL) that enforces answer correctness and geometric consistency while producing fine-grained, pixel-level error feedback, and (3) a two-stage student training scheme in which the student first learns from validated CoT traces and is then iteratively refined using VAL feedback. Our student (Gemma-3 12B) achieves 91.4% ANLS and 82.4% mAP on DocVQA as a pure VLM, requiring no text detection or OCR at inference. Extensive ablations demonstrate that validated feedback contributes a 6.3 mAP gain and iterative refinement accounts for a 9.7 mAP improvement. We release 95k high-quality, validator-verified CoT traces to advance spatial reasoning research in document understanding.
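To make the validate-then-distill idea concrete, below is a minimal Python sketch of how a trace validator and stage-1 data curation step could fit together. It is an illustration under stated assumptions, not the paper's implementation: the `CoTTrace`, `validate`, and `filter_traces` names are hypothetical, the exact-match answer check is a crude stand-in for the paper's ANLS-based scoring, and the single IoU check against detected text boxes only gestures at the validator's pixel-level geometric feedback.

from dataclasses import dataclass

# A pixel-space box: (x0, y0, x1, y1). All names below are illustrative.
Box = tuple[float, float, float, float]

@dataclass
class CoTTrace:
    question: str
    reasoning: str   # teacher's chain-of-thought text
    answer: str
    answer_box: Box  # region the teacher grounds the answer in

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two pixel boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    def area(r: Box) -> float:
        return (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def validate(trace: CoTTrace, gold_answer: str,
             detected_boxes: list[Box], iou_thresh: float = 0.5):
    """Accept a trace only if the answer matches and the grounding box
    overlaps a validation-time text detection; return (ok, feedback).
    Exact string match is a simplification of ANLS scoring."""
    if trace.answer.strip().lower() != gold_answer.strip().lower():
        return False, f"answer mismatch: got '{trace.answer}', expected '{gold_answer}'"
    best = max((iou(trace.answer_box, b) for b in detected_boxes), default=0.0)
    if best < iou_thresh:
        return False, f"grounding off: best IoU {best:.2f} < {iou_thresh}"
    return True, "ok"

def filter_traces(items, iou_thresh: float = 0.5):
    """Stage 1 curation: keep validator-accepted traces for student SFT;
    rejected traces carry feedback usable for stage-2 refinement."""
    kept, rejected = [], []
    for trace, gold, boxes in items:
        ok, fb = validate(trace, gold, boxes, iou_thresh)
        (kept if ok else rejected).append((trace, fb))
    return kept, rejected

if __name__ == "__main__":
    trace = CoTTrace(
        question="What is the invoice total?",
        reasoning="The 'Total' field in the bottom-right table reads $1,250.",
        answer="$1,250",
        answer_box=(410.0, 700.0, 520.0, 730.0),
    )
    # Hypothetical validation-time text detections for the same page.
    detected = [(405.0, 698.0, 525.0, 732.0), (50.0, 60.0, 300.0, 90.0)]
    ok, feedback = validate(trace, "$1,250", detected)
    print(ok, feedback)  # -> True ok

In this reading, stage-2 iterative refinement would rerun the validator on the student's own outputs and fine-tune on the resulting error feedback; the loop above only covers the filtering side of that pipeline.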
Similar Papers
Chain-of-Visual-Thought: Teaching VLMs to See and Think Better with Continuous Visual Tokens
CV and Pattern Recognition
Lets computers see and understand pictures better.
Towards Faithful Reasoning in Comics for Small MLLMs
CV and Pattern Recognition
Helps computers understand funny comics and jokes.
FlipVQA-Miner: Cross-Page Visual Question-Answer Mining from Textbooks
Artificial Intelligence
Makes AI smarter using old textbooks.