Score: 2

Atomic Reasoning for Scientific Table Claim Verification

Published: June 8, 2025 | arXiv ID: 2506.06972v1

By: Yuji Zhang, Qingyun Wang, Cheng Qian, and more

Potential Business Impact:

Helps computers check science facts from tables.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Scientific texts often convey authority through their technical language and complex data, yet this same complexity can help misinformation spread. Non-experts are particularly susceptible to misleading claims based on scientific tables because of their high information density and perceived credibility. Existing table claim verification models, including state-of-the-art large language models (LLMs), often struggle with fine-grained reasoning, leading to errors and imprecision when verifying scientific claims. Inspired by Cognitive Load Theory, we propose that a model's ability to interpret table-based claims can be improved by reducing its cognitive load through modular, reusable reasoning components (i.e., atomic skills). We introduce a skill-chaining schema that dynamically composes these skills to support more accurate and generalizable reasoning at lower cognitive load. To evaluate this approach, we create SciAtomicBench, a cross-domain benchmark with fine-grained reasoning annotations. With only 350 fine-tuning examples, our model trained with atomic reasoning outperforms GPT-4o's chain-of-thought reasoning, achieving state-of-the-art results with far less training data.
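To make the "atomic skills plus skill chaining" idea concrete, here is a minimal Python sketch of the general pattern: a table claim is verified by composing small, narrowly scoped reasoning steps (retrieval, aggregation, comparison) rather than one monolithic reasoning pass. The skill names, the example claim, and the decomposition are illustrative assumptions, not the authors' implementation or the SciAtomicBench schema.

```python
# Illustrative sketch of skill chaining for table claim verification.
# Not the paper's code; skill names and the example claim are hypothetical.

from typing import Dict, List

Table = Dict[str, List[float]]  # column name -> column values

# --- Atomic skills: each performs one narrowly scoped operation -------------

def lookup(table: Table, column: str, row: int) -> float:
    """Retrieve a single cell value."""
    return table[column][row]

def aggregate_mean(table: Table, column: str) -> float:
    """Compute the mean of a column."""
    values = table[column]
    return sum(values) / len(values)

def compare_greater(a: float, b: float) -> bool:
    """Check whether a > b."""
    return a > b

# --- Skill chaining: a claim is verified by composing atomic steps ----------

def verify_claim(table: Table) -> bool:
    # Hypothetical claim: "Method A's score in trial 0 exceeds Method B's mean."
    a_trial0 = lookup(table, "method_a", 0)      # step 1: retrieval
    b_mean = aggregate_mean(table, "method_b")   # step 2: aggregation
    return compare_greater(a_trial0, b_mean)     # step 3: comparison

if __name__ == "__main__":
    table = {"method_a": [0.91, 0.88], "method_b": [0.84, 0.86]}
    print(verify_claim(table))  # True
```

The design point is that each skill is reusable across claims and domains, and the chain for a new claim is assembled dynamically from the same small library of steps, which is what lets the approach generalize from few fine-tuning examples.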

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language