Recontextualization Mitigates Specification Gaming without Modifying the Specification
By: Ariana Azarbal , Victor Gillioz , Vladimir Ivanov and more
Developers often struggle to specify correct training labels and rewards. Perhaps they don't need to. We propose recontextualization, which reduces how often language models "game" training signals, performing misbehaviors those signals mistakenly reinforce. We show recontextualization prevents models from learning to 1) prioritize evaluation metrics over chat response quality; 2) special-case code to pass incorrect tests; 3) lie to users; and 4) become sycophantic. Our method works by generating completions from prompts discouraging misbehavior and then recontextualizing them as though they were in response to prompts permitting misbehavior. Recontextualization trains language models to resist misbehavior even when instructions permit it. This mitigates the reinforcement of misbehavior from misspecified training signals, reducing specification gaming without improving the supervision signal.
Similar Papers
Context informs pragmatic interpretation in vision-language models
Computation and Language
Computers learn to understand context like people.
Synthetic Error Injection Fails to Elicit Self-Correction In Language Models
Artificial Intelligence
Teaching computers to fix their own mistakes failed.
Reasoning About Intent for Ambiguous Requests
Computation and Language
Shows computers many ways to answer confusing questions.