LLaVul: A Multimodal LLM for Interpretable Vulnerability Reasoning about Source Code

Published: September 22, 2025 | arXiv ID: 2509.17337v1

By: Ala Jararweh, Michael Adams, Avinash Sahu, and more

Potential Business Impact:

Automatically finds and explains hidden security flaws in software source code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The increasing complexity of software systems places a growing demand on reasoning tools that can uncover vulnerabilities manifest in source code. Many current approaches treat vulnerability analysis as a classification task, oversimplifying the nuanced, context-dependent scenarios found in real-world code. Although current code large language models (LLMs) excel at code understanding, they often pay little attention to security-specific reasoning. We propose LLaVul, a multimodal LLM tailored to provide fine-grained reasoning about code through question answering (QA). Our model is trained to integrate paired code and natural-language queries into a unified representation space, enhancing reasoning and context-dependent insight into code vulnerabilities. To evaluate the model's performance, we construct a curated dataset of real-world vulnerabilities paired with security-focused questions and answers. LLaVul outperforms state-of-the-art general-purpose and code LLMs on both the QA and detection tasks. We further explain its decision-making through qualitative analysis that highlights capabilities and limitations. By integrating code and QA, LLaVul enables more interpretable, security-focused code understanding.
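To make the paired code-and-question formulation concrete, here is a minimal sketch of how such QA training instances might be assembled for instruction tuning. The dataclass fields, prompt template, and example below are illustrative assumptions, not the authors' released code or the paper's actual data format.

```python
# A minimal sketch (not the authors' code) of pairing a source snippet
# with a security-focused question and answer, then serializing the
# triple into a single instruction-tuning prompt.

from dataclasses import dataclass


@dataclass
class VulnQAExample:
    code: str      # source snippet under analysis
    question: str  # security-focused natural-language query
    answer: str    # fine-grained reasoning / explanation


def to_prompt(ex: VulnQAExample) -> str:
    """Serialize one example into an instruction-tuning prompt.

    The template is hypothetical; the paper does not specify its
    exact prompt format.
    """
    return (
        "### Code:\n"
        f"{ex.code}\n\n"
        "### Question:\n"
        f"{ex.question}\n\n"
        "### Answer:\n"
        f"{ex.answer}"
    )


example = VulnQAExample(
    code="char buf[8];\nstrcpy(buf, user_input);",
    question="Does this snippet contain a memory-safety vulnerability?",
    answer=(
        "Yes. strcpy copies user_input into an 8-byte stack buffer "
        "without bounds checking, a classic buffer overflow (CWE-120)."
    ),
)

print(to_prompt(example))
```

Framing detection as QA in this way, rather than as a binary label, is what lets the model return an explanation alongside its verdict.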

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science: Artificial Intelligence