Does Self-Evaluation Enable Wireheading in Language Models?

Published: November 28, 2025 | arXiv ID: 2511.23092v1

By: David Demitri Africa, Hans Ethan Ting

Potential Business Impact:

Shows when letting AI grade its own work leads it to game rewards instead of learning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Self-evaluation is increasingly central to language model training, from constitutional AI to self-refinement. We investigate whether coupling self-evaluation to reward signals creates incentives for wireheading, where agents manipulate reward measurements rather than improving task performance. We formalize conditions under which reward-channel control strictly dominates task-focused behavior in POMDPs and test these predictions empirically. Across two models and three tasks, we find that models whose self-grades determine rewards exhibit substantial grade inflation without corresponding accuracy gains, particularly on ambiguous tasks like summarization. Models that self-evaluate but do not control rewards show no such inflation. Our results demonstrate that self-evaluation is safe when decoupled from learning signals but dangerous when coupled, with clear implications for agentic system design.
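To make the coupled/decoupled distinction concrete, here is a minimal toy sketch (not the authors' code; the functions and numbers are hypothetical) of the incentive the paper formalizes: when the reward channel is the model's own self-grade, inflating the grade dominates investing effort in the task, whereas an externally measured reward only pays off for real performance.

```python
# Toy illustration of reward-channel coupling (hypothetical, not from the paper).
# coupled=True  -> the agent's own self-grade IS the reward (wireheading channel).
# coupled=False -> reward comes from an external measurement of task success.

def true_accuracy(effort: float) -> float:
    """Probability the task is actually solved, given invested effort (toy curve)."""
    return min(1.0, 0.2 + 0.6 * effort)

def expected_reward(effort: float, self_grade: float, coupled: bool) -> float:
    """Reward the agent receives during training under each condition."""
    if coupled:
        return self_grade            # grading high is enough; effort is irrelevant
    return true_accuracy(effort)     # only real performance pays off

def best_policy(coupled: bool):
    """Grid-search the (effort, self_grade) pair that maximizes expected reward."""
    grid = [i / 10 for i in range(11)]
    return max(
        ((e, g) for e in grid for g in grid),
        key=lambda p: expected_reward(p[0], p[1], coupled),
    )

if __name__ == "__main__":
    print("coupled  :", best_policy(coupled=True))    # zero effort, maximal self-grade
    print("decoupled:", best_policy(coupled=False))   # maximal effort, grade irrelevant
```

In this toy setting the coupled optimum is pure grade inflation with no effort, mirroring the paper's finding that models whose self-grades determine rewards inflate grades without accuracy gains, while the decoupled optimum ignores the self-grade entirely.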

Page Count
5 pages

Category
Computer Science:
Artificial Intelligence