Explaining Software Vulnerabilities with Large Language Models

Published: November 6, 2025 | arXiv ID: 2511.04179v1

By: Oshando Johnson, Alexandra Fomina, Ranjith Krishnamurthy and more

Potential Business Impact:

Helps coders fix security problems faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The prevalence of security vulnerabilities has prompted companies to adopt static application security testing (SAST) tools for vulnerability detection. Nevertheless, these tools frequently exhibit usability limitations: their generic warning messages do not sufficiently communicate important information to developers, resulting in misunderstandings or oversight of critical findings. In light of recent developments in Large Language Models (LLMs) and their text generation capabilities, our work investigates a hybrid approach that uses LLMs to address the explainability challenges of SAST tools. In this paper, we present SAFE, an Integrated Development Environment (IDE) plugin that leverages GPT-4o to explain the causes, impacts, and mitigation strategies of vulnerabilities detected by SAST tools. Our expert user study findings indicate that the explanations generated by SAFE can significantly assist beginner to intermediate developers in understanding and addressing security vulnerabilities, thereby improving the overall usability of SAST tools.
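To make the hybrid approach concrete, here is a minimal sketch of how a plugin like SAFE might feed a raw SAST warning to GPT-4o and ask for cause, impact, and mitigation. This is an illustrative assumption, not the paper's actual implementation: the `SastFinding` fields, the prompt wording, and the use of the OpenAI Python SDK are all hypothetical choices.

```python
from dataclasses import dataclass


@dataclass
class SastFinding:
    """A simplified SAST warning (hypothetical schema for illustration)."""
    rule_id: str
    message: str
    file: str
    line: int
    snippet: str


def build_prompt(finding: SastFinding) -> str:
    """Assemble a prompt asking the LLM to explain cause, impact, and mitigation."""
    return (
        "You are a security assistant. For the static-analysis finding below, "
        "explain (1) the cause, (2) the potential impact, and (3) a concrete "
        "mitigation, in language suitable for a beginner developer.\n\n"
        f"Rule: {finding.rule_id}\n"
        f"Warning: {finding.message}\n"
        f"Location: {finding.file}:{finding.line}\n"
        f"Code:\n{finding.snippet}\n"
    )


def explain(finding: SastFinding) -> str:
    """Send the prompt to GPT-4o. Requires the `openai` package and an
    OPENAI_API_KEY in the environment; network call, so not run by default."""
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": build_prompt(finding)}],
    )
    return resp.choices[0].message.content
```

An IDE plugin would call `explain()` when the developer clicks a warning and render the response inline next to the flagged code, which matches the workflow the abstract describes.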

Country of Origin
🇺🇸 🇩🇪 United States, Germany

Page Count
5 pages

Category
Computer Science:
Software Engineering