Efficient Code Analysis via Graph-Guided Large Language Models
By: Hang Gao, Tao Peng, Baoquan Cui, and more
Potential Business Impact:
Finds hidden malicious code in software programs.
Large Language Models (LLMs) have significantly advanced code analysis tasks, yet they struggle to detect malicious behaviors fragmented across files, whose intricate dependencies are easily lost amid large volumes of benign code. We therefore propose a graph-centric attention acquisition pipeline that enhances LLMs' ability to localize malicious behavior. The approach parses a project into a code graph, uses an LLM to encode nodes with semantic and structural signals, and trains a Graph Neural Network (GNN) under sparse supervision. The GNN performs an initial detection and, by interpreting its predictions, identifies the code sections most likely to contain malicious behavior. These influential regions then guide the LLM's attention for in-depth analysis. This strategy significantly reduces interference from irrelevant context while keeping annotation costs low. Extensive experiments show that the method consistently outperforms existing approaches on multiple public and custom datasets, highlighting its potential for practical deployment in software security scenarios.
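
The pipeline sketched in the abstract (LLM node embeddings, a GNN trained with sparse node labels, then routing the highest-scoring regions back to the LLM) can be illustrated with a small, self-contained example. This is a hypothetical reconstruction, not the authors' implementation: `TwoLayerGCN`, `normalize_adj`, the toy call graph, and the random features standing in for LLM-produced embeddings are all assumptions made for illustration.

```python
# Minimal sketch of the graph-guided attention pipeline: a small GCN is
# trained under sparse node supervision, scores every node of a code graph,
# and the top-scoring nodes are the regions handed to the LLM for analysis.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGCN(nn.Module):
    """Plain-PyTorch GCN: logits = A_hat @ relu(A_hat @ X @ W1) @ W2."""
    def __init__(self, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, a_hat):
        h = F.relu(a_hat @ self.lin1(x))
        return self.lin2(a_hat @ h)

def normalize_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by standard GCNs."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

# Toy code graph: 6 nodes (functions/files), edges = call/import relations.
adj = torch.zeros(6, 6)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]:
    adj[i, j] = adj[j, i] = 1.0
a_hat = normalize_adj(adj)

# Stand-in for LLM node embeddings carrying semantic + structural signals.
x = torch.randn(6, 16)

# Sparse supervision: only two nodes are labelled (1 = malicious, 0 = benign).
labeled_idx = torch.tensor([0, 4])
labels = torch.tensor([0, 1])

model = TwoLayerGCN(16, 32)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = F.cross_entropy(model(x, a_hat)[labeled_idx], labels)
    loss.backward()
    opt.step()

# Initial GNN detection: per-node probability of being malicious.
scores = F.softmax(model(x, a_hat), dim=1)[:, 1]

# Influential regions: top-k nodes whose code is routed to the LLM for
# focused, in-depth analysis instead of the whole project.
top_k = torch.topk(scores, k=2).indices.tolist()
print("Nodes routed to the LLM for deeper inspection:", top_k)
```

In a real deployment, the selected nodes would be mapped back to their source spans and placed into the LLM's prompt, so the model inspects only the suspicious regions rather than the full repository.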
Similar Papers
A Decompilation-Driven Framework for Malware Detection with Large Language Models
Cryptography and Security
Helps computers spot bad computer programs.
Large Language Model (LLM) for Software Security: Code Analysis, Malware Analysis, Reverse Engineering
Cryptography and Security
Helps computers find computer viruses faster.