Efficient Code Analysis via Graph-Guided Large Language Models
By: Hang Gao, Tao Peng, Baoquan Cui, and more
Potential Business Impact:
Detects hidden malicious code fragments in large software projects.
Malicious behavior is often hidden in small, easily overlooked code fragments, especially within large and complex codebases. The cross-file dependencies of these fragments make it difficult for even powerful large language models (LLMs) to detect them reliably. We propose a graph-centric attention acquisition pipeline that enhances LLMs' ability to localize malicious behavior. The approach parses a project into a code graph, uses an LLM to encode nodes with semantic and structural signals, and trains a Graph Neural Network (GNN) under sparse supervision. The GNN performs an initial detection, and through backtracking of its predictions, identifies key code sections that are most likely to contain malicious behavior. These influential regions are then used to guide the LLM's attention for in-depth analysis. This strategy significantly reduces interference from irrelevant context while maintaining low annotation costs. Extensive experiments show that the method consistently outperforms existing methods on multiple public and self-built datasets, highlighting its potential for practical deployment in software security scenarios.
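The abstract's pipeline can be sketched as follows. This is a minimal, illustrative sketch, not the paper's implementation: the graph, the per-node scores (which the trained GNN would produce), and all names are hypothetical stand-ins. It shows the backtracking step, where high-scoring predictions are walked backward along dependency edges to collect the code regions that guide the LLM's attention.

```python
from collections import defaultdict

# Hypothetical mini code graph: nodes are code fragments, edges are
# cross-file dependencies (e.g., call/import relations).
graph = {
    "utils.parse": ["net.send"],    # parse() calls send()
    "net.send": ["crypto.exfil"],   # send() calls exfil()
    "crypto.exfil": [],             # suspicious leaf fragment
    "ui.render": [],                # benign fragment
}

# Stand-in for the GNN's per-node suspicion scores; in the paper these
# come from a GNN trained under sparse supervision, here they are hand-set.
scores = {"utils.parse": 0.1, "net.send": 0.4,
          "crypto.exfil": 0.9, "ui.render": 0.05}

def backtrack_influential(graph, scores, threshold=0.5):
    """Start from high-scoring nodes and walk dependency edges backward
    to collect the code regions that should focus the LLM's analysis."""
    # Invert edges so we can walk from a suspicious node to its callers.
    callers = defaultdict(list)
    for src, dsts in graph.items():
        for dst in dsts:
            callers[dst].append(src)

    seeds = [n for n, s in scores.items() if s >= threshold]
    region, stack = set(), list(seeds)
    while stack:
        node = stack.pop()
        if node in region:
            continue
        region.add(node)
        stack.extend(callers[node])  # follow cross-file dependencies
    return region

focus = backtrack_influential(graph, scores)
# "focus" holds the fragments handed to the LLM for in-depth analysis,
# filtering out irrelevant context such as ui.render.
```

The threshold and traversal direction are assumptions for illustration; the key idea from the abstract is that only the backtracked region, not the whole codebase, is passed to the LLM, reducing interference from irrelevant context.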
Similar Papers
A Decompilation-Driven Framework for Malware Detection with Large Language Models
Cryptography and Security
Uses LLMs on decompiled binaries to detect malware.
Adversarial Attacks and Defenses on Graph-aware Large Language Models (LLMs)
Cryptography and Security
Defends graph-aware LLMs against adversarial attacks.