Score: 0

Technical Report: Evaluating Goal Drift in Language Model Agents

Published: May 5, 2025 | arXiv ID: 2505.02709v1

By: Rauno Arike , Elizabeth Donoway , Henning Bartsch and more

Potential Business Impact:

Keeps AI robots from forgetting their jobs.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

As language models (LMs) are increasingly deployed as autonomous agents, their robust adherence to human-assigned objectives becomes crucial for safe operation. When these agents operate independently for extended periods without human oversight, even initially well-specified goals may gradually shift. Detecting and measuring goal drift - an agent's tendency to deviate from its original objective over time - presents significant challenges, as goals can shift gradually, causing only subtle behavioral changes. This paper proposes a novel approach to analyzing goal drift in LM agents. In our experiments, agents are first explicitly given a goal through their system prompt, then exposed to competing objectives through environmental pressures. We demonstrate that while the best-performing agent (a scaffolded version of Claude 3.5 Sonnet) maintains nearly perfect goal adherence for more than 100,000 tokens in our most difficult evaluation setting, all evaluated models exhibit some degree of goal drift. We also find that goal drift correlates with models' increasing susceptibility to pattern-matching behaviors as the context length grows.

Page Count
36 pages

Category
Computer Science:
Artificial Intelligence