LLM vs. SAST: A Technical Analysis on Detecting Coding Bugs of GPT4-Advanced Data Analysis

Published: June 18, 2025 | arXiv ID: 2506.15212v1

By: Madjid G. Tehrani, Eldar Sultanow, William J. Buchanan, and more

Potential Business Impact:

Detects exploitable software vulnerabilities more accurately than traditional static analysis (SAST) tools.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

With the rapid advancements in Natural Language Processing (NLP), large language models (LLMs) like GPT-4 have gained significant traction in diverse applications, including security vulnerability scanning. This paper investigates the efficacy of GPT-4 in identifying software vulnerabilities compared to traditional Static Application Security Testing (SAST) tools. Drawing from an array of security mistakes, our analysis underscores the capabilities of GPT-4 in LLM-enhanced vulnerability scanning. We find that GPT-4 (Advanced Data Analysis) outperforms SAST, achieving 94% accuracy in detecting 32 types of exploitable vulnerabilities. This study also addresses the potential security concerns surrounding LLMs, emphasising the imperative of security by design/default and other security best practices for AI.
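To make the workflow concrete, the sketch below shows how an LLM-based vulnerability scan might be driven programmatically. The paper evaluated GPT-4 Advanced Data Analysis interactively, so the model name, prompt wording, and example snippet here are illustrative assumptions rather than the authors' exact setup.

```python
# A minimal sketch of LLM-assisted vulnerability scanning, assuming access to an
# OpenAI-compatible chat API. Model name, prompt, and code snippet are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical snippet containing a classic SQL injection flaw (CWE-89).
SNIPPET = '''
def get_user(cursor, username):
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cursor.fetchone()
'''

PROMPT = (
    "Act as a security reviewer. List any exploitable vulnerabilities in the "
    "following Python code, naming the CWE where possible, then suggest a fix.\n\n"
    + SNIPPET
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model identifier
    temperature=0,  # deterministic output simplifies comparison with SAST findings
    messages=[{"role": "user", "content": PROMPT}],
)

print(response.choices[0].message.content)
```

In a comparison of the kind the paper describes, findings produced this way would be checked against a ground-truth list of seeded vulnerabilities and against the reports of SAST tools to compute detection accuracy.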

Page Count
17 pages

Category
Computer Science:
Cryptography and Security