Technical Report: Evaluating Goal Drift in Language Model Agents
By: Rauno Arike, Elizabeth Donoway, Henning Bartsch, and more
Potential Business Impact:
Keeps AI agents from forgetting their assigned jobs.
As language models (LMs) are increasingly deployed as autonomous agents, their robust adherence to human-assigned objectives becomes crucial for safe operation. When these agents operate independently for extended periods without human oversight, even initially well-specified goals may gradually shift. Detecting and measuring goal drift - an agent's tendency to deviate from its original objective over time - presents significant challenges, as goals can shift gradually, causing only subtle behavioral changes. This paper proposes a novel approach to analyzing goal drift in LM agents. In our experiments, agents are first explicitly given a goal through their system prompt, then exposed to competing objectives through environmental pressures. We demonstrate that while the best-performing agent (a scaffolded version of Claude 3.5 Sonnet) maintains nearly perfect goal adherence for more than 100,000 tokens in our most difficult evaluation setting, all evaluated models exhibit some degree of goal drift. We also find that goal drift correlates with models' increasing susceptibility to pattern-matching behaviors as the context length grows.
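The setup described in the abstract (a goal fixed in the system prompt, then sustained environmental pressure toward a competing objective, with adherence tracked as the context grows) can be pictured as a small evaluation loop. The sketch below is illustrative only and does not reproduce the paper's actual harness; `agent`, `pressure_events`, and `score_adherence` are hypothetical placeholders standing in for a real model call, the environment's competing-objective messages, and a goal-adherence scorer.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DriftRecord:
    step: int
    context_tokens: int
    adherence: float  # 1.0 = action serves the original goal, 0.0 = serves the competing objective

def evaluate_goal_drift(
    agent: Callable[[str, List[str]], str],     # hypothetical agent: (system_prompt, history) -> action
    system_goal: str,                            # goal assigned via the system prompt
    pressure_events: List[str],                  # environment messages pushing a competing objective
    score_adherence: Callable[[str], float],     # hypothetical scorer: action -> adherence in [0, 1]
) -> List[DriftRecord]:
    """Feed the agent a stream of environmental pressures and record how well
    each action still serves the originally assigned goal as context accumulates."""
    history: List[str] = []
    records: List[DriftRecord] = []
    for step, event in enumerate(pressure_events):
        history.append(event)
        action = agent(system_goal, history)
        history.append(action)
        # Rough context-length proxy: whitespace token count of the transcript so far.
        context_tokens = sum(len(msg.split()) for msg in history)
        records.append(DriftRecord(step, context_tokens, score_adherence(action)))
    return records
```

Injecting the agent and the adherence scorer as callables is just one way to keep the loop model-agnostic, so the same drift curve (adherence versus context length) could be plotted for any evaluated model.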
Similar Papers
Evaluating the Goal-Directedness of Large Language Models (Artificial Intelligence): Helps AI focus better on its job.
Stay Focused: Problem Drift in Multi-Agent Debate (Computation and Language): Fixes AI arguments that wander off-topic.
Agent Drift: Quantifying Behavioral Degradation in Multi-Agent LLM Systems Over Extended Interactions (Artificial Intelligence): Keeps AI helpers working right for a long time.