ARIAL: An Agentic Framework for Document VQA with Precise Answer Localization
By: Ahmad Mohammadshirazi, Pinaki Prasad Guha Neogi, Dheeraj Kulshrestha, and more
Potential Business Impact:
Helps computers find exact answers in documents.
Document Visual Question Answering (VQA) requires models not only to extract accurate textual answers but also to precisely localize them within document images, a capability critical for interpretability in high-stakes applications. However, existing systems either achieve strong textual accuracy while producing unreliable spatial grounding, or they sacrifice performance for interpretability. We present ARIAL (Agentic Reasoning for Interpretable Answer Localization), a modular framework that orchestrates specialized tools through an LLM-based planning agent to achieve both precise answer extraction and reliable spatial grounding. ARIAL decomposes Document VQA into structured subtasks: OCR-based text extraction with TrOCR, retrieval-augmented context selection using semantic search, answer generation via a fine-tuned Gemma 3-27B model, and explicit bounding-box localization through text-to-region alignment. This modular architecture produces transparent reasoning traces, enabling tool-level auditability and independent component optimization. We evaluate ARIAL on four benchmarks (DocVQA, FUNSD, CORD, and SROIE) using both textual accuracy (ANLS) and spatial precision (mAP averaged over IoU thresholds from 0.50 to 0.95). ARIAL achieves state-of-the-art results across all datasets: 88.7 ANLS and 50.1 mAP on DocVQA, 90.0 ANLS and 50.3 mAP on FUNSD, 85.5 ANLS and 60.2 mAP on CORD, and 93.1 ANLS on SROIE, surpassing the previous best method (DLaVA) by +2.8 ANLS and +3.9 mAP on DocVQA. Our work demonstrates how agentic orchestration of specialized tools can simultaneously improve performance and interpretability, providing a pathway toward trustworthy, explainable document AI systems.
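The abstract outlines a four-tool pipeline: OCR-based text extraction, retrieval-augmented context selection, answer generation, and text-to-region localization. The sketch below illustrates how such a pipeline could be wired together. All names here (run_ocr, retrieve_context, generate_answer, localize_answer, answer_with_localization) are hypothetical placeholders standing in for the paper's TrOCR, semantic-search, Gemma 3-27B, and alignment components; the model calls are stubbed, so this shows the orchestration pattern rather than the authors' implementation.

```python
# Minimal sketch of an ARIAL-style Document VQA pipeline.
# All function/class names are illustrative placeholders, not the authors' API.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class OCRWord:
    text: str
    box: tuple[int, int, int, int]  # (x0, y0, x1, y1) in pixel coordinates


def run_ocr(image_path: str) -> list[OCRWord]:
    """Stub for the OCR tool (a TrOCR-based reader in the paper)."""
    raise NotImplementedError("plug in an OCR backend here")


def retrieve_context(words: list[OCRWord], question: str, k: int = 32) -> list[OCRWord]:
    """Toy retrieval: prefer OCR words that also occur in the question.
    The paper uses semantic (embedding-based) search over the extracted text."""
    q_tokens = set(question.lower().split())
    return sorted(words, key=lambda w: -len({w.text.lower()} & q_tokens))[:k]


def generate_answer(question: str, context: list[OCRWord]) -> str:
    """Stub for the answer-generation tool (a fine-tuned LLM in the paper)."""
    raise NotImplementedError("plug in an LLM backend here")


def localize_answer(answer: str, words: list[OCRWord]) -> tuple[int, int, int, int] | None:
    """Text-to-region alignment: union the boxes of OCR words whose text
    appears in the generated answer string."""
    hits = [w.box for w in words if w.text and w.text.lower() in answer.lower()]
    if not hits:
        return None
    xs0, ys0, xs1, ys1 = zip(*hits)
    return (min(xs0), min(ys0), max(xs1), max(ys1))


def answer_with_localization(image_path: str, question: str):
    """Fixed sequential orchestration of the four tools: OCR -> retrieve ->
    generate -> localize. Returns the textual answer and its bounding box."""
    words = run_ocr(image_path)
    context = retrieve_context(words, question)
    answer = generate_answer(question, context)
    box = localize_answer(answer, words)
    return answer, box
```

In the paper, an LLM-based planning agent decides when and how to invoke each tool and records a reasoning trace for auditability; the fixed sequential call above is only the simplest possible orchestration of those tools.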
Similar Papers
Words into World: A Task-Adaptive Agent for Language-Guided Spatial Retrieval in AR
CV and Pattern Recognition
Lets computers understand and interact with real-world objects.
AVATAAR: Agentic Video Answering via Temporal Adaptive Alignment and Reasoning
CV and Pattern Recognition
Helps computers understand long videos better.
Think Visually, Reason Textually: Vision-Language Synergy in ARC
CV and Pattern Recognition
Teaches computers to learn like humans do.