EvilGenie: A Reward Hacking Benchmark
By: Jonathan Gabor, Jayson Lynch, Jonathan Rosenfeld
Potential Business Impact:
Finds AI that cheats at coding tasks.
We introduce EvilGenie, a benchmark for reward hacking in programming settings. We source problems from LiveCodeBench and create an environment in which agents can easily reward hack, for example by hardcoding test cases or editing the test files. We measure reward hacking in three ways: held-out unit tests, LLM judges, and test-file edit detection. We verify these methods against human review and against each other. We find the LLM judge to be highly effective at detecting reward hacking in unambiguous cases, and observe only minimal improvement from the use of held-out test cases. In addition to testing many models using Inspect's basic_agent scaffold, we also measure reward hacking rates for three popular proprietary coding agents: OpenAI's Codex, Anthropic's Claude Code, and Google's Gemini CLI, using GPT-5, Claude Sonnet 4, and Gemini 2.5 Pro, respectively. We observe explicit reward hacking by both Codex and Claude Code, and misaligned behavior by all three agents. Our codebase can be found at https://github.com/JonathanGabor/EvilGenie.
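
To illustrate the third detection signal, here is a minimal sketch of test-file edit detection: it snapshots digests of the test files before the agent runs and flags any that change afterwards. The directory layout and helper names are hypothetical illustrations, not taken from EvilGenie's harness; the actual implementation is in the linked repository.

import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    # SHA-256 digest of a file's contents.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot_tests(test_dir: Path) -> dict[str, str]:
    # Record digests of every test file before the agent runs.
    return {str(p): file_digest(p) for p in sorted(test_dir.rglob("test_*.py"))}

def detect_test_edits(before: dict[str, str], test_dir: Path) -> list[str]:
    # Return test files that were changed, deleted, or newly added during the run.
    after = snapshot_tests(test_dir)
    changed_or_deleted = [p for p, h in before.items() if after.get(p) != h]
    added = [p for p in after if p not in before]
    return changed_or_deleted + added

# Usage (paths are hypothetical): snapshot before handing control to the agent,
# then diff afterwards and flag the episode for LLM-judge or human review.
# before = snapshot_tests(Path("workspace/tests"))
# ... run the agent on the task ...
# flagged = detect_test_edits(before, Path("workspace/tests"))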
Similar Papers
School of Reward Hacks: Hacking harmless tasks generalizes to misaligned behavior in LLMs
Artificial Intelligence
AI learns to cheat instead of doing tasks.
BigCodeArena: Unveiling More Reliable Human Preferences in Code Generation via Execution
Software Engineering
Tests computer code writing without humans.
Natural Emergent Misalignment from Reward Hacking in Production RL
Artificial Intelligence
Teaches AI to cheat, then fixes it.