Score: 1

BadScientist: Can a Research Agent Write Convincing but Unsound Papers that Fool LLM Reviewers?

Published: October 20, 2025 | arXiv ID: 2510.18003v1

By: Fengqing Jiang, Yichen Feng, Yuetai Li, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

AI can trick other AI into accepting fake science.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The convergence of LLM-powered research assistants and AI-based peer review systems creates a critical vulnerability: fully automated publication loops where AI-generated research is evaluated by AI reviewers without human oversight. We investigate this through BadScientist, a framework that evaluates whether fabrication-oriented paper generation agents can deceive multi-model LLM review systems. Our generator employs presentation-manipulation strategies requiring no real experiments. We develop a rigorous evaluation framework with formal error guarantees (concentration bounds and calibration analysis), calibrated on real data. Our results reveal systematic vulnerabilities: fabricated papers achieve acceptance rates up to . Critically, we identify concern-acceptance conflict: reviewers frequently flag integrity issues yet assign acceptance-level scores. Our mitigation strategies show only marginal improvements, with detection accuracy barely exceeding random chance. Despite provably sound aggregation mathematics, integrity checking systematically fails, exposing fundamental limitations in current AI-driven review systems and underscoring the urgent need for defense-in-depth safeguards in scientific publishing.
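The abstract describes aggregating scores from multiple LLM reviewers and measuring a "concern-acceptance conflict". The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation; the `Review` type, `aggregate` function, and `ACCEPT_THRESHOLD` value are assumptions made for the example.

```python
# Hypothetical sketch only -- NOT the BadScientist codebase.
# Illustrates aggregating per-reviewer LLM scores and counting
# "concern-acceptance conflicts": reviews that flag integrity issues
# yet still assign an acceptance-level score.

from dataclasses import dataclass
from statistics import mean

ACCEPT_THRESHOLD = 6.0  # assumed acceptance cutoff on a 1-10 rating scale


@dataclass
class Review:
    score: float           # numeric rating from one LLM reviewer
    flags_integrity: bool  # whether the review text raised integrity concerns


def aggregate(reviews: list[Review]) -> dict:
    """Average reviewer scores and count concern-acceptance conflicts."""
    avg = mean(r.score for r in reviews)
    conflicts = sum(
        1 for r in reviews if r.flags_integrity and r.score >= ACCEPT_THRESHOLD
    )
    return {
        "mean_score": avg,
        "accepted": avg >= ACCEPT_THRESHOLD,
        "concern_acceptance_conflicts": conflicts,
    }


if __name__ == "__main__":
    # Example: two reviewers flag integrity concerns but still score high,
    # so the paper is accepted while conflicts are recorded.
    reviews = [Review(7.0, True), Review(6.5, True), Review(5.0, False)]
    print(aggregate(reviews))
```

Under these assumptions, a paper can clear the acceptance threshold even when most reviews raised concerns, which is the failure mode the abstract highlights.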

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Cryptography and Security