Automatic Reviewers Fail to Detect Faulty Reasoning in Research Papers: A New Counterfactual Evaluation Framework
By: Nils Dycke, Iryna Gurevych
Potential Business Impact:
AI can't spot bad research logic yet.
Large Language Models (LLMs) have great potential to accelerate and support scholarly peer review and are increasingly used as fully automatic review generators (ARGs). However, biases and systematic errors in their reviews may pose significant risks to scientific integrity, so understanding the specific capabilities and limitations of state-of-the-art ARGs is essential. We focus on a core reviewing skill that underpins high-quality peer review: detecting faulty research logic. This involves evaluating the internal consistency between a paper's results, interpretations, and claims. We present a fully automated counterfactual evaluation framework that isolates and tests this skill under controlled conditions. Testing a range of ARG approaches, we find that, contrary to expectations, flaws in research logic have no significant effect on their output reviews. Based on our findings, we derive three actionable recommendations for future work and release our counterfactual dataset and evaluation framework publicly.
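The core idea of such a counterfactual evaluation is a paired comparison: each paper appears once intact and once with a planted flaw in its research logic, and the ARG's reviews of the two versions are compared. The sketch below illustrates that loop under assumed interfaces; `PaperPair`, `generate_review`, `score_review`, and `mentions_flaw` are hypothetical placeholders, not the authors' released code.

```python
# Minimal sketch of a counterfactual evaluation loop for automatic review
# generators (ARGs). All names below are illustrative assumptions, not the
# interface of the paper's released framework.

from dataclasses import dataclass
from typing import Callable, Dict, List
import statistics


@dataclass
class PaperPair:
    """An original paper and a counterfactual copy with a planted logic flaw."""
    paper_id: str
    original_text: str        # intact results, interpretations, and claims
    counterfactual_text: str  # same paper, but a claim contradicts the results


def evaluate_arg(
    pairs: List[PaperPair],
    generate_review: Callable[[str], str],            # the ARG under test
    score_review: Callable[[str], float],             # extracts the overall score
    mentions_flaw: Callable[[str, PaperPair], bool],  # does the review flag the flaw?
) -> Dict[str, float]:
    """Compare ARG behaviour on original vs. flawed versions of each paper."""
    score_deltas: List[float] = []
    mention_count = 0
    for pair in pairs:
        review_orig = generate_review(pair.original_text)
        review_cf = generate_review(pair.counterfactual_text)
        # A flaw-sensitive ARG should rate the flawed version lower ...
        score_deltas.append(score_review(review_orig) - score_review(review_cf))
        # ... and explicitly point out the broken claim.
        if mentions_flaw(review_cf, pair):
            mention_count += 1
    return {
        "mean_score_delta": statistics.mean(score_deltas),
        "flaw_mention_rate": mention_count / len(pairs),
    }
```

A near-zero mean score delta and a low flaw-mention rate would correspond to the paper's headline finding: the injected logic flaws leave the generated reviews essentially unchanged.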
Similar Papers
Addressing Logical Fallacies In Scientific Reasoning From Large Language Models: Towards a Dual-Inference Training Framework
Artificial Intelligence
Teaches AI to spot wrong ideas, not just right ones.
FLAWS: A Benchmark for Error Identification and Localization in Scientific Papers
Computation and Language
Helps computers find mistakes in science papers.
LLM-REVal: Can We Trust LLM Reviewers Yet?
Computation and Language
AI reviewers unfairly favor AI-written papers.