AdvJudge-Zero: Binary Decision Flips in LLM-as-a-Judge via Adversarial Control Tokens
By: Tung-Ling Li, Yuhao Wu, Hongliang Liu
Potential Business Impact:
Makes AI judges unfairly say "yes" to wrong answers.
Reward models and LLM-as-a-Judge systems are central to modern post-training pipelines such as RLHF, DPO, and RLAIF, where they provide scalar feedback and binary decisions that guide model selection and RL-based fine-tuning. We show that these judge systems exhibit a recurring vulnerability: short sequences of low-perplexity control tokens can flip many binary evaluations from correct ``No'' judgments to incorrect ``Yes'' judgments by steering the last-layer logit gap. These control tokens are patterns that a policy model could plausibly generate during post-training, and thus represent realistic reward-hacking risks rather than worst-case adversarial strings. Our method, AdvJudge-Zero, uses the model's next-token distribution and beam-search exploration to discover diverse control-token sequences from scratch, and our analysis shows that the induced hidden-state perturbations concentrate in a low-rank ``soft mode'' that is anti-aligned with the judge's refusal direction. Empirically, these tokens cause very high false positive rates when large open-weight and specialized judge models score incorrect answers on math and reasoning benchmarks. Finally, we show that LoRA-based adversarial training on small sets of control-token-augmented examples can markedly reduce these false positives while preserving evaluation quality.
Similar Papers
Efficient Online RFT with Plug-and-Play LLM Judges: Unlocking State-of-the-Art Performance
Machine Learning (CS)
Makes AI learn better with less computer power.
BadJudge: Backdoor Vulnerabilities of LLM-as-a-Judge
Computation and Language
Tricks AI judges to unfairly favor bad answers.
J1: Incentivizing Thinking in LLM-as-a-Judge via Reinforcement Learning
Computation and Language
Teaches AI to judge answers better by thinking.