Paraphrasing Adversarial Attack on LLM-as-a-Reviewer

Published: January 11, 2026 | arXiv ID: 2601.06884v1

By: Masahiro Kaneko

Potential Business Impact:

Tricks AI reviewers into giving papers higher scores.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The use of large language models (LLMs) in peer review systems has attracted growing attention, making it essential to examine their potential vulnerabilities. Prior attacks rely on prompt injection, which alters manuscript content and conflates injection susceptibility with evaluation robustness. We propose the Paraphrasing Adversarial Attack (PAA), a black-box optimization method that searches for paraphrased sequences yielding higher review scores while preserving semantic equivalence and linguistic naturalness. PAA leverages in-context learning, using previous paraphrases and their scores to guide candidate generation. Experiments across five ML and NLP conferences with three LLM reviewers and five attacking models show that PAA consistently increases review scores without changing the paper's claims. Human evaluation confirms that generated paraphrases maintain meaning and naturalness. We also find that attacked papers exhibit increased perplexity in reviews, offering a potential detection signal, and that paraphrasing submissions can partially mitigate attacks.
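The abstract describes PAA as a black-box search guided by in-context examples of previous paraphrases and their scores. Below is a minimal sketch of one plausible realization of that loop; the `review_score` (LLM reviewer oracle) and `paraphrase` (attacking model) callables are hypothetical stand-ins, and the paper's actual prompts, candidate counts, and selection rule may differ.

```python
def paa_attack(paper_text, review_score, paraphrase, n_iters=20, n_candidates=4):
    """Sketch of a PAA-style black-box search for a higher-scoring paraphrase.

    review_score(text) -> float   : queries the LLM reviewer (black box).
    paraphrase(text, history)     : prompts an attacking LLM, conditioning on
                                    prior (paraphrase, score) pairs as
                                    in-context examples.
    Both callables are assumptions, not the paper's actual interface.
    """
    # Seed the history with the original manuscript and its baseline score.
    history = [(paper_text, review_score(paper_text))]
    best_text, best_score = history[0]

    for _ in range(n_iters):
        # Generate candidates conditioned on the scored history, so the
        # attacker can infer which phrasings tend to raise the score.
        candidates = [paraphrase(best_text, history) for _ in range(n_candidates)]

        for cand in candidates:
            score = review_score(cand)      # one query to the LLM reviewer
            history.append((cand, score))
            if score > best_score:          # greedy hill climbing on score
                best_text, best_score = cand, score

    return best_text, best_score
```

Greedy selection over scored candidates is just one natural instantiation; the property the abstract emphasizes is that each round of candidate generation is guided by the accumulated (paraphrase, score) history, while a separate constraint (not shown here) keeps the paraphrases semantically equivalent and natural.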

Country of Origin
🇦🇪 United Arab Emirates

Page Count
14 pages

Category
Computer Science:
Computation and Language