MuSciClaims: Multimodal Scientific Claim Verification

Published: June 5, 2025 | arXiv ID: 2506.04585v2

By: Yash Kumar Lal, Manikanta Bandham, Mohammad Saqib Hasan, and others

Potential Business Impact:

Helps automated systems check whether scientific claims are supported by the figures in research papers.

Business Areas:
Image Recognition, Data and Analytics, Software

Assessing scientific claims requires identifying, extracting, and reasoning with multimodal data expressed in information-rich figures in scientific literature. Despite the large body of work on scientific QA, figure captioning, and other multimodal reasoning tasks over chart-based data, there are no readily usable multimodal benchmarks that directly test claim verification abilities. To remedy this gap, we introduce MuSciClaims, a new benchmark accompanied by diagnostic tasks. We automatically extract supported claims from scientific articles and manually perturb them to produce contradicted claims; the perturbations are designed to test for a specific set of claim verification capabilities. We also introduce a suite of diagnostic tasks that help explain model failures. Our results show that most vision-language models perform poorly (~0.3-0.5 F1), with even the best model achieving only 0.72 F1. Models are also biased towards judging claims as supported, likely because they misunderstand nuanced perturbations within the claims. Our diagnostics show that models are poor at localizing the correct evidence within figures, struggle to aggregate information across modalities, and often fail to understand basic components of a figure.
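To make the task concrete, here is a minimal sketch of a MuSciClaims-style evaluation loop. The dataset fields, the example claim pair, and the `classify_claim` stub are hypothetical illustrations, not the paper's actual data format or API; the stub deliberately always predicts "supported" to mirror the support bias the abstract reports.

```python
"""Sketch of evaluating a vision-language model on figure-grounded
claim verification. Field names and the model stub are assumptions."""
from sklearn.metrics import f1_score

# Hypothetical examples: each pairs a figure with a claim and a gold label.
# A contradicted claim is a minimal perturbation of a supported one,
# e.g. flipping the direction of a reported trend.
examples = [
    {"figure": "fig3a.png",
     "claim": "Treatment A increases expression of gene X.",
     "label": "supported"},
    {"figure": "fig3a.png",
     "claim": "Treatment A decreases expression of gene X.",  # perturbed
     "label": "contradicted"},
]

def classify_claim(figure_path: str, claim: str) -> str:
    """Placeholder for a vision-language model call returning
    'supported' or 'contradicted' for a (figure, claim) pair.
    This trivial baseline always answers 'supported', illustrating
    the support bias observed in the benchmark results."""
    return "supported"

gold = [ex["label"] for ex in examples]
pred = [classify_claim(ex["figure"], ex["claim"]) for ex in examples]

# Macro F1 over both labels; the always-supported baseline scores ~0.33
# here, in the same ballpark as the ~0.3-0.5 F1 most models achieve.
print(f1_score(gold, pred, average="macro",
               labels=["supported", "contradicted"]))
```

Swapping the stub for a real model call, and the toy list for the benchmark's (figure, claim, label) triples, gives the F1 numbers the abstract discusses.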

Country of Origin
🇺🇸 United States

Page Count
22 pages

Category
Computer Science:
Computation and Language