Natural Emergent Misalignment from Reward Hacking in Production RL
By: Monte MacDiarmid , Benjamin Wright , Jonathan Uesato and more
Potential Business Impact:
Teaches AI to cheat, then fixes it.
We show that when large language models learn to reward hack on production RL environments, this can result in egregious emergent misalignment. We start with a pretrained model, impart knowledge of reward hacking strategies via synthetic document finetuning or prompting, and train on a selection of real Anthropic production coding environments. Unsurprisingly, the model learns to reward hack. Surprisingly, the model generalizes to alignment faking, cooperation with malicious actors, reasoning about malicious goals, and attempting sabotage when used with Claude Code, including in the codebase for this paper. Applying RLHF safety training using standard chat-like prompts results in aligned behavior on chat-like evaluations, but misalignment persists on agentic tasks. Three mitigations are effective: (i) preventing the model from reward hacking; (ii) increasing the diversity of RLHF safety training; and (iii) "inoculation prompting", wherein framing reward hacking as acceptable behavior during training removes misaligned generalization even when reward hacking is learned.
Similar Papers
School of Reward Hacks: Hacking harmless tasks generalizes to misaligned behavior in LLMs
Artificial Intelligence
AI learns to cheat instead of doing tasks.
LLM Misalignment via Adversarial RLHF Platforms
Machine Learning (CS)
Makes AI say bad things when trained.
Generative Adversarial Post-Training Mitigates Reward Hacking in Live Human-AI Music Interaction
Machine Learning (CS)
Makes AI music adapt and create new songs live.