Robust Diagram Reasoning: A Framework for Enhancing LVLM Performance on Visually Perturbed Scientific Diagrams
By: Minghao Zhou, Rafael Souza, Yaqian Hu, and more
Potential Business Impact:
Helps computers understand messy science pictures.
Large Language Models (LLMs) and their multimodal variants (LVLMs) hold immense promise for scientific and engineering applications, particularly in processing visual information like scientific diagrams. However, their practical deployment is hindered by a critical lack of robustness to common visual perturbations such as noise, blur, and occlusions, which are prevalent in real-world scientific documents. Existing evaluation benchmarks largely overlook this challenge, leaving the robust reasoning capabilities of LVLMs on visually degraded scientific diagrams underexplored. To address this, we introduce the Robust Diagram Reasoning (RDR) framework, a novel approach designed to enhance and rigorously evaluate LVLMs' performance under such conditions. At its core, RDR employs an Adaptive Multi-View & Consistency Verification (AMCV) mechanism, which involves generating multiple perturbed versions of a diagram, performing parallel inference, and then applying a consistency-based self-correction loop. We also propose two new metrics, Perturbation Robustness Score (PRS) and Visual Degradation Consistency (VDC), to quantify robustness. Furthermore, we construct SciDiagram-Robust, the first large-scale scientific diagram question-answering dataset specifically augmented with diverse, programmatically generated visual perturbations. Our extensive experiments demonstrate that even state-of-the-art closed-source LVLMs like GPT-4V exhibit significant performance degradation when faced with perturbed inputs (Clean Accuracy 85.2% vs. PRS 72.1%).
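The AMCV mechanism and the PRS metric described above can be sketched roughly as follows. This is an illustrative reconstruction only: the function names, the majority-vote consistency rule, and the definition of PRS as mean accuracy over perturbed inputs are assumptions for illustration, not the paper's exact formulations.

```python
# Sketch of Adaptive Multi-View & Consistency Verification (AMCV) and an
# assumed Perturbation Robustness Score (PRS). Names and formulas here are
# illustrative assumptions; the paper's actual definitions may differ.
from collections import Counter
from typing import Callable, List, Tuple

def amcv_answer(diagram, model: Callable, perturbations: List[Callable],
                max_rounds: int = 2) -> str:
    """Query the model on the clean view plus perturbed views, then keep the
    majority answer. If no clear majority emerges, re-query — a crude
    stand-in for the consistency-based self-correction loop."""
    views = [diagram] + [p(diagram) for p in perturbations]
    for _ in range(max_rounds):
        answers = [model(v) for v in views]
        (best, count), = Counter(answers).most_common(1)
        if count > len(answers) / 2:      # consistent majority found
            return best
    return best                           # fall back to plurality answer

def perturbation_robustness_score(model: Callable,
                                  dataset: List[Tuple[object, str]],
                                  perturbations: List[Callable]) -> float:
    """Assumed PRS: mean accuracy over all perturbed versions of each item."""
    correct = total = 0
    for diagram, gold in dataset:
        for p in perturbations:
            correct += int(model(p(diagram)) == gold)
            total += 1
    return correct / total
```

For example, with toy string "diagrams", a noise perturbation that flips the model's answer on one view is outvoted by the clean and blurred views, so AMCV still returns the consistent answer, while PRS drops to reflect the perturbed-input error rate.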
Similar Papers
Robust-R1: Degradation-Aware Reasoning for Robust Visual Understanding
CV and Pattern Recognition
Helps AI see clearly even when pictures are blurry.
More Than the Final Answer: Improving Visual Extraction and Logical Consistency in Vision-Language Models
CV and Pattern Recognition
Makes AI better at seeing and thinking.
LVMed-R2: Perception and Reflection-driven Complex Reasoning for Medical Report Generation
Computation and Language
Helps computers write better doctor reports.