Generative Large Language Model usage in Smart Contract Vulnerability Detection
By: Peter Ince, Jiangshan Yu, Joseph K. Liu, and more
Recent years have seen an explosion of activity in Generative AI, specifically Large Language Models (LLMs), revolutionising applications across many fields. Smart contract vulnerability detection is no exception: smart contracts exist on public chains and can transact billions of dollars daily, so continuous improvement in vulnerability detection is crucial. This has led many researchers to investigate using generative LLMs to aid in detecting vulnerabilities in smart contracts. This paper presents a systematic review of current LLM-based smart contract vulnerability detection tools, comparing them against the traditional static and dynamic analysis tools Slither and Mythril. Our analysis highlights key areas where each performs better and shows that, while these tools show promise, the LLM-based tools available for testing are not yet ready to replace more traditional ones. We conclude with recommendations on how LLMs are best used in the vulnerability detection process and offer insights for improving on the state of the art via hybrid approaches and targeted pre-training of much smaller models.
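One of the hybrid approaches the abstract alludes to is feeding static-analysis output to an LLM for triage rather than asking the model to find vulnerabilities from scratch. The following is a minimal, hypothetical sketch of that idea; the function names (`run_static_analysis`, `build_triage_prompt`) and the stubbed detection logic are illustrative, not the paper's method or any real tool's API:

```python
# Hypothetical hybrid pipeline: pair static-analysis findings with contract
# source in a prompt so an LLM can confirm or dismiss each candidate issue.
# The static-analysis step is a stand-in for a tool like Slither.

def run_static_analysis(source: str) -> list[str]:
    """Stand-in for a real analyzer; flags low-level calls as a demo."""
    findings = []
    if ".call{" in source or "call.value" in source:
        findings.append("low-level call detected: possible reentrancy")
    return findings

def build_triage_prompt(source: str, findings: list[str]) -> str:
    """Combine contract source with static-analysis warnings for LLM triage."""
    bullets = "\n".join(f"- {f}" for f in findings) or "- (none)"
    return (
        "You are auditing a Solidity smart contract.\n"
        "Static analysis reported:\n"
        f"{bullets}\n\n"
        "Contract source:\n"
        f"{source}\n"
        "For each finding, state whether it is a true vulnerability and why."
    )

contract = """
contract Vault {
    mapping(address => uint) balances;
    function withdraw() public {
        (bool ok, ) = msg.sender.call{value: balances[msg.sender]}("");
        require(ok);
        balances[msg.sender] = 0;
    }
}
"""

prompt = build_triage_prompt(contract, run_static_analysis(contract))
```

Constraining the LLM to verify concrete findings, rather than scan freely, narrows the search space and can reduce the false positives that the review identifies as a weakness of current LLM-based tools.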