Explaining Software Vulnerabilities with Large Language Models
By: Oshando Johnson, Alexandra Fomina, Ranjith Krishnamurthy, and others
Potential Business Impact:
Helps coders fix security problems faster.
The prevalence of security vulnerabilities has prompted companies to adopt static application security testing (SAST) tools for vulnerability detection. These tools, however, frequently suffer from usability limitations: their generic warning messages fail to communicate important information to developers, leading to misunderstandings or overlooked critical findings. In light of recent advances in Large Language Models (LLMs) and their text generation capabilities, our work investigates a hybrid approach that uses LLMs to tackle SAST explainability challenges. In this paper, we present SAFE, an Integrated Development Environment (IDE) plugin that leverages GPT-4o to explain the causes, impacts, and mitigation strategies of vulnerabilities detected by SAST tools. Our expert user study indicates that the explanations generated by SAFE can significantly assist beginner-to-intermediate developers in understanding and addressing security vulnerabilities, thereby improving the overall usability of SAST tools.
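To make the hybrid approach concrete, below is a minimal sketch of how an IDE plugin might forward a single SAST finding to GPT-4o for a cause/impact/mitigation explanation. It assumes the official OpenAI Python SDK; the SastFinding fields, the explain_finding helper, and the prompt wording are illustrative assumptions, not SAFE's actual implementation, which the abstract does not detail.

# Minimal sketch of the SAST-to-LLM explanation pipeline described above.
# Assumes the official OpenAI Python SDK (openai >= 1.0); the data model
# and prompt are hypothetical, not taken from SAFE itself.
from dataclasses import dataclass

from openai import OpenAI


@dataclass
class SastFinding:
    """A simplified SAST warning as an IDE plugin might receive it."""
    rule_id: str       # e.g. a CWE or analyzer rule identifier
    message: str       # the tool's generic warning text
    file_path: str
    line: int
    code_snippet: str  # the flagged source lines


def explain_finding(client: OpenAI, finding: SastFinding) -> str:
    """Ask GPT-4o for the cause, impact, and mitigation of one finding."""
    prompt = (
        f"A static analysis tool reported rule {finding.rule_id} "
        f"at {finding.file_path}:{finding.line}:\n"
        f"{finding.message}\n\n"
        f"Flagged code:\n{finding.code_snippet}\n\n"
        "Explain for a developer: (1) the cause of this vulnerability, "
        "(2) its potential impact, and (3) how to mitigate it."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You explain security findings clearly and concisely."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    finding = SastFinding(
        rule_id="CWE-89",
        message="Possible SQL injection: tainted value reaches a query.",
        file_path="app/db.py",
        line=42,
        code_snippet="cursor.execute(\"SELECT * FROM users WHERE name = '\" + name + \"'\")",
    )
    print(explain_finding(OpenAI(), finding))  # requires OPENAI_API_KEY

Structuring the output around cause, impact, and mitigation mirrors the three explanation dimensions the paper names; a plugin would render this text next to the warning inside the IDE rather than printing it.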
Similar Papers
LLM vs. SAST: A Technical Analysis on Detecting Coding Bugs of GPT4-Advanced Data Analysis
Cryptography and Security
Finds computer bugs better than old tools.
LLM-Driven SAST-Genius: A Hybrid Static Analysis Framework for Comprehensive and Actionable Security
Cryptography and Security
Finds computer bugs better, with fewer mistakes.
Rethinking Autonomy: Preventing Failures in AI-Driven Software Engineering
Software Engineering
Makes AI writing computer code safer.