Score: 0

Multilingual Hidden Prompt Injection Attacks on LLM-Based Academic Reviewing

Published: December 29, 2025 | arXiv ID: 2512.23684v1

By: Panagiotis Theocharopoulos, Ajinkya Kulkarni, Mathew Magimai. -Doss

Potential Business Impact:

Makes AI reviewers unfairly change paper grades.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) are increasingly considered for use in high-impact workflows, including academic peer review. However, LLMs are vulnerable to document-level hidden prompt injection attacks. In this work, we construct a dataset of approximately 500 real academic papers accepted to ICML and evaluate the effect of embedding hidden adversarial prompts within these documents. Each paper is injected with semantically equivalent instructions in four different languages and reviewed using an LLM. We find that prompt injection induces substantial changes in review scores and accept/reject decisions for English, Japanese, and Chinese injections, while Arabic injections produce little to no effect. These results highlight the susceptibility of LLM-based reviewing systems to document-level prompt injection and reveal notable differences in vulnerability across languages.

Page Count
7 pages

Category
Computer Science:
Computation and Language