Agentic Reinforcement Learning for Search is Unsafe
By: Yushi Yang, Shreyansh Padarha, Andrew Lee, and more
Potential Business Impact:
Makes smart AI tools unsafe with simple tricks.
Agentic reinforcement learning (RL) trains large language models to autonomously call tools during reasoning, with search as the most common application. These models excel at multi-step reasoning tasks, but their safety properties are not well understood. In this study, we show that RL-trained search models inherit refusal behaviour from instruction tuning and often deflect harmful requests by turning them into safe queries. However, this safety is fragile. Two simple attacks, one that forces the model to begin its response with a search call (Search attack) and another that encourages it to search repeatedly (Multi-search attack), trigger cascades of harmful searches and answers. Across two model families (Qwen, Llama) with both local and web search, these attacks lower refusal rates by up to 60.0%, answer safety by 82.5%, and search-query safety by 82.4%. The attacks succeed by triggering the model to generate harmful, request-mirroring search queries before it can emit the inherited refusal tokens. This exposes a core weakness of current RL training: it rewards continued generation of effective queries without accounting for their harmfulness. As a result, RL search models have vulnerabilities that users can easily exploit, making it urgent to develop safety-aware agentic RL pipelines that optimise for safe search.
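To make the Search attack concrete, here is a minimal sketch of the prefill idea the abstract describes: seeding the start of the assistant turn with an opening search tag so the model continues with a search query rather than its usual refusal tokens. The model name, the <search>...</search> tag convention, and the chat-template usage are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a prefill-style "Search attack" on an agentic search model.
# Assumptions (not from the paper): the model name below and the
# <search>...</search> tag format used by the search pipeline are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder for an RL-trained search model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

request = "<user request>"  # placeholder; no actual harmful content here

# Build the chat prompt, then force the response to open with a search call.
messages = [{"role": "user", "content": request}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
prompt += "<search>"  # the attack: generation must continue inside a search query

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The Multi-search attack described above works analogously, except the prompt encourages repeated search calls rather than forcing a single opening one.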
Similar Papers
SafeSearch: Do Not Trade Safety for Utility in LLM Search Agents
Computation and Language
Makes AI search engines safer and more helpful.
A Comprehensive Survey on Reinforcement Learning-based Agentic Search: Foundations, Roles, Optimizations, Evaluations, and Applications
Artificial Intelligence
Teaches computers to find better answers online.
Adversarial Reinforcement Learning for Large Language Model Agent Safety
Machine Learning (CS)
Protects smart computer helpers from sneaky tricks.