Score: 0

When Refusals Fail: Unstable Safety Mechanisms in Long-Context LLM Agents

Published: December 2, 2025 | arXiv ID: 2512.02445v1

By: Tsimur Hadeliya , Mohammad Ali Jauhar , Nidhi Sakpal and more

Potential Business Impact:

Makes AI remember more to do harder jobs.

Business Areas:
Natural Language Processing Artificial Intelligence, Data and Analytics, Software

Solving complex or long-horizon problems often requires large language models (LLMs) to use external tools and operate over a significantly longer context window. New LLMs enable longer context windows and support tool calling capabilities. Prior works have focused mainly on evaluation of LLMs on long-context prompts, leaving agentic setup relatively unexplored, both from capability and safety perspectives. Our work addresses this gap. We find that LLM agents could be sensitive to length, type, and placement of the context, exhibiting unexpected and inconsistent shifts in task performance and in refusals to execute harmful requests. Models with 1M-2M token context windows show severe degradation already at 100K tokens, with performance drops exceeding 50\% for both benign and harmful tasks. Refusal rates shift unpredictably: GPT-4.1-nano increases from $\sim$5\% to $\sim$40\% while Grok 4 Fast decreases from $\sim$80\% to $\sim$10\% at 200K tokens. Our work shows potential safety issues with agents operating on longer context and opens additional questions on the current metrics and paradigm for evaluating LLM agent safety on long multi-step tasks. In particular, our results on LLM agents reveal a notable divergence in both capability and safety performance compared to prior evaluations of LLMs on similar criteria.

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)