CREST-Search: Comprehensive Red-teaming for Evaluating Safety Threats in Large Language Models Powered by Web Search
By: Haoran Ou, Kangjie Chen, Xingshuo Han, and more
Potential Business Impact:
Finds hidden dangers in AI that uses the internet.
Large Language Models (LLMs) excel at tasks such as dialogue, summarization, and question answering, yet they struggle to adapt to specialized domains and evolving facts. To overcome this, web search has been integrated into LLMs, allowing real-time access to online content. However, this connection magnifies safety risks, as adversarial prompts combined with untrusted sources can expose severe vulnerabilities. We investigate red teaming for LLMs with web search and present CREST-Search, a framework that systematically exposes risks in such systems. Unlike existing methods for standalone LLMs, CREST-Search addresses the complex workflow of search-enabled models by generating adversarial queries with in-context learning and refining them through iterative feedback. We further construct WebSearch-Harm, a search-specific dataset to fine-tune LLMs into efficient red-teaming agents. Experiments show that CREST-Search effectively bypasses safety filters and reveals vulnerabilities in modern web-augmented LLMs, underscoring the need for specialized defenses to ensure trustworthy deployment.
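The abstract describes a loop in which an attacker model drafts adversarial search queries from in-context examples, probes the search-enabled target, and refines the query using judge feedback. The sketch below illustrates that general shape only; the paper does not publish code, so every function, parameter, and threshold here is a hypothetical stand-in, not the authors' implementation.

```python
# Minimal sketch of an iterative red-teaming loop for a search-enabled LLM.
# All interfaces (attacker, target, judge) are assumed placeholders supplied
# by the caller; nothing here reflects CREST-Search's actual internals.

from typing import Callable

def red_team_loop(
    attacker: Callable[[str], str],      # LLM that drafts/refines queries
    target: Callable[[str], str],        # search-enabled LLM under test
    judge: Callable[[str, str], tuple[float, str]],  # -> (harm score, feedback)
    seed_examples: list[str],            # in-context demonstrations of attacks
    goal: str,                           # harmful behavior to elicit
    max_rounds: int = 5,
    threshold: float = 0.8,              # illustrative cutoff, not from the paper
) -> str | None:
    """Return a query that elicits unsafe output, or None if none is found."""
    # In-context learning: prime the attacker with example adversarial queries.
    prompt = "\n".join(seed_examples) + f"\nGoal: {goal}\nQuery:"
    query = attacker(prompt)
    for _ in range(max_rounds):
        response = target(query)                  # target searches the web and answers
        score, feedback = judge(goal, response)   # rate harmfulness, explain the miss
        if score >= threshold:
            return query                          # safety filter bypassed
        # Iterative refinement: fold the judge's feedback into the next draft.
        prompt = f"{prompt} {query}\nFeedback: {feedback}\nRevised query:"
        query = attacker(prompt)
    return None
```

In this reading, the judge's textual feedback is what drives refinement: each round conditions the attacker on both the failed query and the reason it failed, which is the "iterative feedback" the abstract refers to.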
Similar Papers
Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models
Computation and Language
Finds and fixes problems in smart computer programs.
A Red Teaming Roadmap Towards System-Level Safety
Cryptography and Security
Makes AI safer from bad people's tricks.
Leveraging Large Language Models for Cybersecurity Risk Assessment -- A Case from Forestry Cyber-Physical Systems
Software Engineering
Helps experts find computer dangers faster.