In-Browser LLM-Guided Fuzzing for Real-Time Prompt Injection Testing in Agentic AI Browsers
By: Avihay Cohen
Potential Business Impact:
Automatically uncovers hidden webpage instructions that can hijack AI browser agents.
Large Language Model (LLM) based agents integrated into web browsers (often called agentic AI browsers) offer powerful automation of web tasks. However, they are vulnerable to indirect prompt injection attacks, in which malicious instructions hidden in a webpage deceive the agent into performing unwanted actions. These attacks can bypass traditional web security boundaries, because the AI agent operates with the user's privileges across sites. In this paper, we present a novel fuzzing framework that runs entirely in the browser and is guided by an LLM to automatically discover such prompt injection vulnerabilities in real time.
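The abstract implies a core loop: an LLM proposes candidate injection payloads, the framework plants them in the live page, and the agent's behavior is observed for deviations from its benign task. The TypeScript sketch below illustrates one plausible shape for such an in-browser loop. It is a minimal sketch under stated assumptions, not the authors' implementation: the names generatePayloads, plantPayload, and observeAgent, the /llm/generate endpoint, and the deviation check are all illustrative placeholders.

```typescript
// Hypothetical sketch of an in-browser, LLM-guided fuzzing loop for
// indirect prompt injection. All endpoints and hooks are assumptions.

interface FuzzResult {
  payload: string;        // injected instruction candidate
  triggered: boolean;     // did the agent deviate from the benign task?
  agentActions: string[]; // actions observed while the payload was live
}

// Ask an LLM for candidate injection strings, seeded with recent results
// so generation is guided by what has (and has not) worked so far.
async function generatePayloads(history: FuzzResult[]): Promise<string[]> {
  const res = await fetch("/llm/generate", { // assumed local LLM endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      task: "indirect prompt injection candidates for a browser agent",
      feedback: history.slice(-10), // recent outcomes steer the next batch
    }),
  });
  return (await res.json()).payloads as string[];
}

// Plant a candidate payload where an agent would read it but a human
// would not: visually hidden text that still enters the accessibility tree.
function plantPayload(payload: string): HTMLElement {
  const el = document.createElement("div");
  el.setAttribute("aria-label", payload);
  el.style.position = "absolute";
  el.style.left = "-9999px";
  el.textContent = payload;
  document.body.appendChild(el);
  return el;
}

// One fuzzing round: inject, let the agent run, record whether it deviated.
async function fuzzRound(
  history: FuzzResult[],
  observeAgent: () => Promise<string[]>, // assumed hook into the agent under test
): Promise<FuzzResult[]> {
  const results: FuzzResult[] = [];
  for (const payload of await generatePayloads(history)) {
    const el = plantPayload(payload);
    const actions = await observeAgent();
    el.remove(); // restore the page between trials
    results.push({
      payload,
      agentActions: actions,
      // Placeholder oracle: flag any action outside the expected set.
      // A real framework would need a task-specific deviation check.
      triggered: actions.some((a) => !a.startsWith("expected:")),
    });
  }
  return results;
}
```

Feeding prior results back into generation is what makes the fuzzing guided rather than blind: the LLM can mutate payloads that came close to triggering a deviation instead of sampling uniformly.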
Similar Papers
Exploiting Web Search Tools of AI Agents for Data Exfiltration
Cryptography and Security
Shows how attackers can abuse the web search tools of AI agents to exfiltrate data.
Manipulating LLM Web Agents with Indirect Prompt Injection Attack via HTML Accessibility Tree
Cryptography and Security
Demonstrates indirect prompt injection through the HTML accessibility tree to manipulate LLM web agents.
BrowseSafe: Understanding and Preventing Prompt Injection Within AI Browser Agents
Machine Learning (CS)
Analyzes and defends against prompt injection in AI browser agents.