Large Language Models for Full-Text Methods Assessment: A Case Study on Mediation Analysis
By: Wenqing Zhang, Trang Nguyen, Elizabeth A. Stuart, and more
Potential Business Impact:
Helps automate checking the methods used in science papers.
Systematic reviews are crucial for synthesizing scientific evidence but remain labor-intensive, especially when extracting detailed methodological information. Large language models (LLMs) offer potential for automating methodological assessments, promising to transform evidence synthesis. Here, using causal mediation analysis as a representative methodological domain, we benchmarked state-of-the-art LLMs against expert human reviewers across 180 full-text scientific articles. Model performance closely correlated with human judgments (accuracy correlation 0.71; F1 correlation 0.97), achieving near-human accuracy on straightforward, explicitly stated methodological criteria. However, accuracy declined sharply on complex, inference-intensive assessments, lagging expert reviewers by up to 15%. Errors commonly resulted from superficial linguistic cues -- for instance, models frequently misinterpreted keywords like "longitudinal" or "sensitivity" as automatic evidence of rigorous methodological approaches, leading to systematic misclassifications. Longer documents yielded lower model accuracy, whereas publication year showed no significant effect. Our findings highlight an important pattern for practitioners using LLMs for methods review and synthesis from full texts: current LLMs excel at identifying explicit methodological features but require human oversight for nuanced interpretations. Integrating automated information extraction with targeted expert review thus provides a promising approach to enhance efficiency and methodological rigor in evidence synthesis across diverse scientific fields.
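To make the evaluation described above concrete, the sketch below shows one way such a benchmark can be scored: per-criterion accuracy and F1 of LLM yes/no extractions against expert labels, followed by the correlation between model and human per-criterion performance. This is not the authors' code; the criterion names, labels, and numbers are toy values invented for illustration.

    # Minimal sketch (illustrative, not the paper's pipeline) of scoring LLM
    # methods-assessment answers against expert labels.
    from statistics import correlation  # Pearson correlation, Python 3.10+

    def accuracy(pred, truth):
        """Share of articles where the model's yes/no call matches the expert label."""
        return sum(p == t for p, t in zip(pred, truth)) / len(truth)

    def f1(pred, truth):
        """F1 score with 'criterion satisfied' (True) as the positive class."""
        tp = sum(p and t for p, t in zip(pred, truth))
        fp = sum(p and not t for p, t in zip(pred, truth))
        fn = sum(not p and t for p, t in zip(pred, truth))
        return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0

    # Toy labels for a handful of articles on one criterion
    # (e.g. "reports a sensitivity analysis").
    expert     = [True, True, False, True, False, False, True, False]
    llm_answer = [True, True, True,  True, False, False, False, False]
    print(accuracy(llm_answer, expert), f1(llm_answer, expert))  # 0.75 0.75

    # With per-criterion scores in hand, whether the model finds hard what humans
    # find hard can be summarized by correlating the two score vectors.
    model_acc = [0.95, 0.88, 0.71, 0.63]  # toy per-criterion model accuracies
    human_acc = [0.98, 0.92, 0.85, 0.78]  # toy per-criterion expert accuracies
    print(correlation(model_acc, human_acc))

A high correlation under this kind of analysis means the model's difficulty profile tracks the human one, even where its absolute accuracy falls short, which matches the pattern the abstract reports.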
Similar Papers
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
Computation and Language
Computers can't yet judge science papers well.
On the Use of Large Language Models for Qualitative Synthesis
Software Engineering
Helps doctors organize medical research faster.
Assessing the Reliability and Validity of Large Language Models for Automated Assessment of Student Essays in Higher Education
Computers and Society
AI can't reliably grade essays yet.