BadScientist: Can a Research Agent Write Convincing but Unsound Papers that Fool LLM Reviewers?
By: Fengqing Jiang, Yichen Feng, Yuetai Li, and more
Potential Business Impact:
AI can trick other AI into accepting fake science.
The convergence of LLM-powered research assistants and AI-based peer review systems creates a critical vulnerability: fully automated publication loops in which AI-generated research is evaluated by AI reviewers without human oversight. We investigate this through BadScientist, a framework that evaluates whether fabrication-oriented paper-generation agents can deceive multi-model LLM review systems. Our generator employs presentation-manipulation strategies that require no real experiments. We develop a rigorous evaluation framework with formal error guarantees (concentration bounds and calibration analysis), calibrated on real data. Our results reveal systematic vulnerabilities: fabricated papers achieve acceptance rates up to . Critically, we identify a concern-acceptance conflict: reviewers frequently flag integrity issues yet assign acceptance-level scores. Our mitigation strategies show only marginal improvements, with detection accuracy barely exceeding random chance. Despite provably sound aggregation mathematics, integrity checking systematically fails, exposing fundamental limitations in current AI-driven review systems and underscoring the urgent need for defense-in-depth safeguards in scientific publishing.
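To make the two technical ingredients named in the abstract concrete, here is a minimal, hedged sketch: a simple mean-score aggregation rule over multiple LLM reviewers, and a Hoeffding-style concentration bound on an acceptance rate estimated from repeated submissions. The score scale, the accept threshold, and the function names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only (not the paper's code): multi-reviewer score
# aggregation plus a concentration bound on a measured acceptance rate.
# All thresholds, score scales, and names are assumed for illustration.
import math
from statistics import mean

def aggregate_reviews(scores, accept_threshold=6.0):
    """Average per-reviewer scores (assumed 1-10 scale) and accept the paper
    if the mean clears the threshold. Both the averaging rule and the
    threshold are assumptions, not taken from the paper."""
    return mean(scores) >= accept_threshold

def hoeffding_halfwidth(n, delta=0.05):
    """Two-sided Hoeffding bound: with probability >= 1 - delta, the true
    acceptance probability lies within this half-width of the empirical
    acceptance rate measured over n fabricated submissions."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

if __name__ == "__main__":
    # Hypothetical scores from three LLM reviewers for one fabricated paper.
    print(aggregate_reviews([7.0, 6.5, 5.5]))           # True: mean 6.33 >= 6.0
    # Error guarantee on an acceptance rate estimated from 200 submissions.
    print(f"+/- {hoeffding_halfwidth(200):.3f} at 95% confidence")
```

Note that the soundness of such aggregation and bounds says nothing about whether individual reviewer scores are trustworthy, which is the gap the paper highlights.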
Similar Papers
When Reject Turns into Accept: Quantifying the Vulnerability of LLM-Based Scientific Reviewers to Indirect Prompt Injection
Artificial Intelligence
Tricks AI judges to accept bad science papers.
Why LLMs Aren't Scientists Yet: Lessons from Four Autonomous Research Attempts
Machine Learning (CS)
AI wrote a science paper that got accepted.