Trust Me, I Know This Function: Hijacking LLM Static Analysis using Bias
By: Shir Bernstein, David Beste, Daniel Ayzenshteyn, and more
Potential Business Impact:
Tricks AI code checkers into missing bugs.
Large Language Models (LLMs) are increasingly trusted to perform automated code review and static analysis at scale, supporting tasks such as vulnerability detection, summarization, and refactoring. In this paper, we identify and exploit a critical vulnerability in LLM-based code analysis: an abstraction bias that causes models to overgeneralize familiar programming patterns and overlook small, meaningful bugs. Adversaries can exploit this blind spot to hijack the control flow of the LLM's interpretation with minimal edits and without affecting actual runtime behavior. We refer to this attack as a Familiar Pattern Attack (FPA). We develop a fully automated, black-box algorithm that discovers and injects FPAs into target code. Our evaluation shows that FPAs are not only effective, but also transferable across models (GPT-4o, Claude 3.5, Gemini 2.0) and universal across programming languages (Python, C, Rust, Go). Moreover, FPAs remain effective even when models are explicitly warned about the attack via robust system prompts. Finally, we explore positive, defensive uses of FPAs and discuss their broader implications for the reliability and safety of code-oriented LLMs.
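To make the abstraction-bias idea concrete, below is a minimal, hypothetical sketch (not taken from the paper, and not the authors' automated algorithm) of what a Familiar Pattern Attack could look like in Python. The function names, the path-traversal bug, and the specific disguise are illustrative assumptions: the point is only that purely cosmetic edits, which leave runtime behavior untouched, can dress a bug up as a well-known idiom that a pattern-matching reviewer tends to wave through.

```python
import os
import tempfile


# --- Original target code: contains a path traversal bug --------------------
def read_user_file(base_dir, filename):
    # BUG: filename is joined without validation, so "../secret.txt"
    # escapes base_dir.
    path = os.path.join(base_dir, filename)
    with open(path) as f:
        return f.read()


# --- Hypothetical FPA-modified version ---------------------------------------
# Only cosmetic edits (rename, docstring, comments, a no-op alias): runtime
# behavior is identical, but the code now reads like the familiar
# "join under a trusted base directory" idiom, inviting an LLM reviewer to
# assume the usual traversal check is present when it is not.
def read_user_file_secure(base_dir, filename):
    """Read a user file using the standard join-under-trusted-base pattern."""
    trusted_base = base_dir                      # familiar-looking alias, no effect
    # Canonical upload-handling idiom: resolve the file under the trusted base.
    path = os.path.join(trusted_base, filename)  # same bug: ".." still escapes
    with open(path) as f:
        return f.read()


if __name__ == "__main__":
    # Both versions behave identically: the edits change the "review surface",
    # not the program.
    with tempfile.TemporaryDirectory() as base:
        with open(os.path.join(base, "note.txt"), "w") as f:
            f.write("hello")
        assert read_user_file(base, "note.txt") == read_user_file_secure(base, "note.txt")
        print("identical runtime behavior, different review surface")
```

This hand-crafted disguise is only a toy; the paper's contribution is a black-box procedure that discovers and injects such familiar-pattern edits automatically, and shows they transfer across models and languages.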
Similar Papers
Enhancing Semantic Understanding in Pointer Analysis using Large Language Models
Software Engineering
Helps computer programs find errors more accurately.
Static Analysis as a Feedback Loop: Enhancing LLM-Generated Code Beyond Correctness
Software Engineering
Makes computer code safer and easier to read.
Everything You Wanted to Know About LLM-based Vulnerability Detection But Were Afraid to Ask
Cryptography and Security
Finds computer bugs better with more code info.