On the Effectiveness of Instruction-Tuning Local LLMs for Identifying Software Vulnerabilities
By: Sangryu Park, Gihyuk Ko, Homook Cho
Potential Business Impact:
Finds computer bugs without sharing code.
Large Language Models (LLMs) show significant promise in automating software vulnerability analysis, a critical task given the impact of security failures in modern software systems. However, current approaches to using LLMs for vulnerability analysis mostly rely on online API-based LLM services, requiring users to disclose source code that is still under development. Moreover, they predominantly frame the task as binary classification (vulnerable or not vulnerable), limiting their practical utility. This paper addresses these limitations by reformulating the problem as Software Vulnerability Identification (SVI), where LLMs are asked to output the type of weakness as a Common Weakness Enumeration (CWE) ID rather than simply indicating the presence or absence of a vulnerability. We also tackle the reliance on large, API-based LLMs by demonstrating that instruction-tuning smaller, locally deployable LLMs can achieve superior identification performance. In our analysis, instruction-tuning a local LLM yielded a better overall trade-off between performance and cost than online API-based LLMs. Our findings indicate that instruction-tuned local models represent a more effective, secure, and practical approach for leveraging LLMs in real-world vulnerability management workflows.
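To make the SVI formulation concrete, here is a minimal sketch of what one instruction-tuning record might look like when the target is a CWE ID rather than a yes/no label. The field names, prompt wording, and C snippet are illustrative assumptions, not taken from the paper.

```python
# Hypothetical SVI training record: the model must name the specific
# CWE weakness class, not merely answer "vulnerable" / "not vulnerable".
import json

def make_svi_record(code: str, cwe_id: str) -> dict:
    """Build one instruction/response pair in a generic chat-style format."""
    return {
        "instruction": (
            "Identify the software weakness in the following code. "
            "Answer with the most specific CWE ID, or 'CWE-NONE' if "
            "no weakness is present."
        ),
        "input": code,
        "output": cwe_id,
    }

# Classic unchecked copy: the fixed-size buffer can be overrun.
vulnerable_c = """
void copy(char *src) {
    char buf[16];
    strcpy(buf, src);  /* no bounds check */
}
"""

# CWE-120: Buffer Copy without Checking Size of Input.
record = make_svi_record(vulnerable_c, "CWE-120")
print(json.dumps(record, indent=2))
```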
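And a minimal sketch of how such records might be used to instruction-tune a small, locally deployable model with parameter-efficient LoRA adapters. The base model name, LoRA hyperparameters, and target modules are assumptions for illustration; the abstract does not specify the paper's exact training setup.

```python
# Hypothetical local instruction-tuning setup using LoRA adapters,
# which keeps memory needs small enough for local hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "codellama/CodeLlama-7b-hf"  # assumed base model, for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Low-rank adapters on attention projections; hyperparameters are illustrative.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# The adapted model can then be fine-tuned on SVI records with any
# standard causal-LM training loop; no code ever leaves the local machine.
```

Because only the adapter weights are trained, this kind of setup fits on commodity GPUs, which is what makes the local-deployment argument practical: the source code never has to be sent to an external API.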
Similar Papers
Everything You Wanted to Know About LLM-based Vulnerability Detection But Were Afraid to Ask
Cryptography and Security
Finds computer bugs better with more code info.
POLAR: Automating Cyber Threat Prioritization through LLM-Powered Assessment
Cryptography and Security
Makes computers better at finding online dangers.
Evaluating LLMs for One-Shot Patching of Real and Artificial Vulnerabilities
Cryptography and Security
Fixes computer bugs automatically, better on real ones.