Score: 1

Efficient Code Analysis via Graph-Guided Large Language Models

Published: January 19, 2026 | arXiv ID: 2601.12890v1

By: Hang Gao, Tao Peng, Baoquan Cui, and more

Potential Business Impact:

Automatically locates hidden malicious code fragments in large software projects.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Malicious behavior is often hidden in small, easily overlooked code fragments, especially within large and complex codebases. The cross-file dependencies of these fragments make it difficult for even powerful large language models (LLMs) to detect them reliably. We propose a graph-centric attention acquisition pipeline that enhances LLMs' ability to localize malicious behavior. The approach parses a project into a code graph, uses an LLM to encode nodes with semantic and structural signals, and trains a Graph Neural Network (GNN) under sparse supervision. The GNN performs an initial detection and, by backtracking through its predictions, identifies the code sections most likely to contain malicious behavior. These influential regions are then used to guide the LLM's attention for in-depth analysis. This strategy significantly reduces interference from irrelevant context while keeping annotation costs low. Extensive experiments show that the approach consistently outperforms existing methods on multiple public and self-built datasets, highlighting its potential for practical deployment in software security scenarios.
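The abstract describes a graph-then-attention pipeline: build a code graph, score nodes with a GNN, backtrack from high-scoring predictions, and hand only the influential regions to the LLM. The sketch below illustrates that flow under stated assumptions: the node features, the single message-passing layer with random weights, and the prompt-building step are hypothetical stand-ins for the paper's LLM encoder and sparsely supervised GNN, not the authors' implementation.

```python
"""Minimal sketch of a graph-guided attention pipeline.
Assumptions: node features, scoring weights, and the toy code graph
are illustrative placeholders, not the paper's trained components."""
import numpy as np

# --- 1. Toy code graph: nodes are code fragments, edges are dependencies ---
nodes = ["utils.parse", "net.download", "installer.run", "crypto.encode"]
code = {
    "utils.parse": "def parse(cfg): return cfg",
    "net.download": "def download(url): return fetch(url)",
    "installer.run": "def run(): exec(download(URL))",  # suspicious fragment
    "crypto.encode": "def encode(b): return b.hex()",
}
edges = [("installer.run", "net.download"), ("net.download", "utils.parse")]

idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)

# Adjacency with self-loops, row-normalized (simple mean aggregation).
A = np.eye(n)
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
A = A / A.sum(axis=1, keepdims=True)

# --- 2. Node features: stand-in for LLM semantic/structural encodings ---
rng = np.random.default_rng(0)
X = rng.normal(size=(n, 8))

# --- 3. One round of message passing plus a linear scorer, an untrained
#        stand-in for the GNN trained under sparse supervision ---
W = rng.normal(size=(8, 8))
w_out = rng.normal(size=8)
H = np.tanh(A @ X @ W)   # aggregate information from dependency neighbors
scores = H @ w_out       # per-node suspicion score

# --- 4. "Backtracking": keep the top-scoring node and its dependency
#        neighborhood as the focused context handed to the LLM ---
top = nodes[int(np.argmax(scores))]
context = {top} | {v for u, v in edges if u == top} | {u for u, v in edges if v == top}

prompt = "Analyze the following code fragments for malicious behavior:\n"
prompt += "\n".join(code[name] for name in sorted(context))
print(prompt)  # this focused prompt would be sent to the LLM for deep analysis
```

The point of the sketch is the interface between the two stages: the GNN never makes the final call, it only narrows the LLM's attention to a small dependency neighborhood, which is what keeps irrelevant cross-file context out of the prompt.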

Repos / Data Links

Page Count
28 pages

Category
Computer Science:
Software Engineering