Too Easily Fooled? Prompt Injection Breaks LLMs on Frustratingly Simple Multiple-Choice Questions
By: Xuyang Guo , Zekai Huang , Zhao Song and more
Potential Business Impact:
Computers can be tricked by hidden instructions.
Large Language Models (LLMs) have recently demonstrated strong emergent abilities in complex reasoning and zero-shot generalization, showing unprecedented potential for LLM-as-a-judge applications in education, peer review, and data quality evaluation. However, their robustness under prompt injection attacks, where malicious instructions are embedded into the content to manipulate outputs, remains a significant concern. In this work, we explore a frustratingly simple yet effective attack setting to test whether LLMs can be easily misled. Specifically, we evaluate LLMs on basic arithmetic questions (e.g., "What is 3 + 2?") presented as either multiple-choice or true-false judgment problems within PDF files, where hidden prompts are injected into the file. Our results reveal that LLMs are indeed vulnerable to such hidden prompt injection attacks, even in these trivial scenarios, highlighting serious robustness risks for LLM-as-a-judge applications.
Similar Papers
Multimodal Prompt Injection Attacks: Risks and Defenses for Modern LLMs
Cryptography and Security
Finds ways AI can be tricked.
Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review
Cryptography and Security
Tricks AI into writing fake science reviews.
Publish to Perish: Prompt Injection Attacks on LLM-Assisted Peer Review
Cryptography and Security
Tricks AI reviewers to miss hidden bad ideas.