Score: 0

Adversarial Attacks on LLM-as-a-Judge Systems: Insights from Prompt Injections

Published: April 25, 2025 | arXiv ID: 2504.18333v1

By: Narek Maloyan, Dmitry Namiot

Potential Business Impact:

Protects AI judges from being tricked.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

LLM as judge systems used to assess text quality code correctness and argument strength are vulnerable to prompt injection attacks. We introduce a framework that separates content author attacks from system prompt attacks and evaluate five models Gemma 3.27B Gemma 3.4B Llama 3.2 3B GPT 4 and Claude 3 Opus on four tasks with various defenses using fifty prompts per condition. Attacks achieved up to seventy three point eight percent success smaller models proved more vulnerable and transferability ranged from fifty point five to sixty two point six percent. Our results contrast with Universal Prompt Injection and AdvPrompter We recommend multi model committees and comparative scoring and release all code and datasets

Page Count
12 pages

Category
Computer Science:
Cryptography and Security