Malicious GenAI Chrome Extensions: Unpacking Data Exfiltration and Malicious Behaviours
By: Shresta B. Seetharam, Mohamed Nabeel, William Melicher
Potential Business Impact:
Finds fake AI tools stealing your info.
The rapid proliferation of AI and GenAI tools has extended to the Chrome Web Store. Cybercriminals are exploiting this trend, deploying malicious Chrome extensions that pose as AI tools or impersonate popular GenAI models to target users. These extensions often appear legitimate while secretly exfiltrating sensitive data or redirecting users' web traffic to attacker-controlled domains. To examine the impact of this trend on the browser extension ecosystem, we curated a dataset of 5,551 AI-themed extensions released to the Chrome Web Store over a nine-month period. Using a multi-signal detection methodology that combines manifest analysis, domain reputation, and runtime network behavior, supplemented with human review, we identified 154 previously undetected malicious Chrome extensions. Together with extensions known from public threat research disclosures, this yielded a final set of 341 malicious extensions for analysis. Of these, 29 were GenAI-related and form the focus of our in-depth analysis and disclosure. We deconstruct representative GenAI cases, including Supersonic AI, DeepSeek AI | Free AI Assistant, and Perplexity Search, to illustrate attacker techniques such as Adversary-in-the-Browser attacks, impersonation, bait-and-switch updates, query hijacking, and redirection. Our findings show that threat actors are leveraging GenAI trends and exploiting browser extension APIs and settings for malicious purposes, demonstrating that the browser extension threat landscape is evolving in step with the rapid adoption of GenAI technologies.
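To make the manifest-analysis signal concrete, the sketch below (in Python) scores an extension's manifest.json for markers commonly abused by malicious extensions, such as broad host permissions and search-provider overrides of the kind used for query hijacking. This is a minimal illustration only: the permission list, weights, and escalation step are our illustrative assumptions, not the detection pipeline described in the paper.

# Minimal sketch of a manifest-analysis signal. The permission list, weights,
# and escalation logic are illustrative assumptions, not the paper's pipeline.
import json

RISKY_PERMISSIONS = {
    "webRequest", "webRequestBlocking", "declarativeNetRequest",
    "cookies", "tabs", "scripting", "history",
}

def score_manifest(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        manifest = json.load(f)

    score = 0

    # Sensitive API permissions requested by the extension.
    perms = set(manifest.get("permissions", [])) | set(manifest.get("optional_permissions", []))
    score += 2 * len(perms & RISKY_PERMISSIONS)

    # Wildcard host access lets the extension read and modify every site the user visits
    # (host patterns live in "permissions" for MV2 and "host_permissions" for MV3).
    hosts = manifest.get("host_permissions", []) + manifest.get("permissions", [])
    if any(h in ("<all_urls>", "*://*/*") for h in hosts):
        score += 5

    # Overriding the default search provider is a common hook for query hijacking
    # and redirection to attacker-controlled domains.
    if "chrome_settings_overrides" in manifest:
        score += 5

    return score

if __name__ == "__main__":
    # Extensions scoring above an (assumed) threshold would be escalated to
    # domain-reputation checks, runtime network analysis, and human review.
    print(score_manifest("manifest.json"))

In practice such a static score is only a first filter; it flags candidates cheaply, while the domain-reputation and runtime-network signals mentioned in the abstract catch extensions whose manifests look benign but whose behavior is not.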
Similar Papers
A Study on Malicious Browser Extensions in 2025
Cryptography and Security
Hackers trick browsers, stealing data and money.
Synthetic Data: AI's New Weapon Against Android Malware
Cryptography and Security
Creates fake malware to train phone security.
When AI Takes the Wheel: Security Analysis of Framework-Constrained Program Generation
Software Engineering
Finds security flaws in computer programs made by AI.