Large language models for automated PRISMA 2020 adherence checking

Published: November 20, 2025 | arXiv ID: 2511.16707v1

By: Yuki Kataoka, Ryuhei So, Masahiro Banno, and more

Potential Business Impact:

Helps automatically check whether scientific papers follow reporting guidelines.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Evaluating adherence to the PRISMA 2020 guideline remains a burden in the peer review process. To address the lack of shareable benchmarks, we constructed a copyright-aware benchmark of 108 Creative Commons-licensed systematic reviews and evaluated ten large language models (LLMs) across five input formats. In a development cohort, supplying structured PRISMA 2020 checklists (Markdown, JSON, XML, or plain text) yielded 78.7-79.7% accuracy versus 45.21% for manuscript-only input (p < 0.0001), with no differences between structured formats (p > 0.9). Across models, accuracy ranged from 70.6% to 82.8%, with distinct sensitivity-specificity trade-offs, and these results were replicated in an independent validation cohort. We then selected Qwen3-Max (a high-sensitivity open-weight model) and extended the evaluation to the full dataset (n = 120), achieving 95.1% sensitivity and 49.3% specificity. Structured checklist provision substantially improves LLM-based PRISMA assessment, though human expert verification remains essential before editorial decisions.
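
The reported accuracy, sensitivity, and specificity follow the standard binary classification definitions. As an illustrative sketch only (not the authors' code), assuming each PRISMA 2020 item is judged adherent (True) or non-adherent (False) by both the LLM and a human expert, with adherence as the positive class, the metrics could be computed like this in Python:

    # Illustrative sketch (not the paper's implementation): item-level
    # accuracy, sensitivity, and specificity from paired LLM vs. expert labels.
    def confusion_counts(llm_labels, expert_labels):
        tp = sum(1 for l, e in zip(llm_labels, expert_labels) if l and e)
        tn = sum(1 for l, e in zip(llm_labels, expert_labels) if not l and not e)
        fp = sum(1 for l, e in zip(llm_labels, expert_labels) if l and not e)
        fn = sum(1 for l, e in zip(llm_labels, expert_labels) if not l and e)
        return tp, tn, fp, fn

    def metrics(llm_labels, expert_labels):
        tp, tn, fp, fn = confusion_counts(llm_labels, expert_labels)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        specificity = tn / (tn + fp) if (tn + fp) else float("nan")
        return accuracy, sensitivity, specificity

    # Hypothetical example: six checklist items judged by the LLM and an expert.
    llm    = [True, True, False, True, False, True]
    expert = [True, False, False, True, True, True]
    acc, sens, spec = metrics(llm, expert)
    print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")

The high-sensitivity, lower-specificity profile reported for Qwen3-Max corresponds to this framing: most adherent items are correctly confirmed, but non-adherent items are flagged less reliably, which is why the abstract stresses expert verification before editorial decisions.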

Page Count
44 pages

Category
Computer Science:
Software Engineering