Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review
By: Matteo Gioele Collu , Umberto Salviati , Roberto Confalonieri and more
Potential Business Impact:
Tricks AI into writing fake science reviews.
Large Language Models (LLMs) are increasingly being integrated into the scientific peer-review process, raising new questions about their reliability and resilience to manipulation. In this work, we investigate the potential for hidden prompt injection attacks, where authors embed adversarial text within a paper's PDF to influence the LLM-generated review. We begin by formalising three distinct threat models that envision attackers with different motivations -- not all of which implying malicious intent. For each threat model, we design adversarial prompts that remain invisible to human readers yet can steer an LLM's output toward the author's desired outcome. Using a user study with domain scholars, we derive four representative reviewing prompts used to elicit peer reviews from LLMs. We then evaluate the robustness of our adversarial prompts across (i) different reviewing prompts, (ii) different commercial LLM-based systems, and (iii) different peer-reviewed papers. Our results show that adversarial prompts can reliably mislead the LLM, sometimes in ways that adversely affect a "honest-but-lazy" reviewer. Finally, we propose and empirically assess methods to reduce detectability of adversarial prompts under automated content checks.
Similar Papers
Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review
Cryptography and Security
Tricks AI reviewers to miss hidden bad ideas.
Prompt Injection Attacks on LLM Generated Reviews of Scientific Publications
Machine Learning (CS)
Makes AI reviewers unfairly accept almost everything.
Prompt-in-Content Attacks: Exploiting Uploaded Inputs to Hijack LLM Behavior
Cryptography and Security
Hides bad instructions in text to trick AI.