An Initial Exploration of Fine-tuning Small Language Models for Smart Contract Reentrancy Vulnerability Detection
By: Ignacio Mariano Andreozzi Pofcher, Joshua Ellul
Potential Business Impact:
Finds coding mistakes in smart contracts.
Large Language Models (LLMs) are increasingly being used for various coding tasks, including helping developers identify bugs, and they are a promising avenue for supporting vulnerability detection, particularly given the flexibility of such generative AI models and tools. Yet for some tasks LLMs may not be suitable, and it may be preferable to use smaller language models that can fit, run, and be trained on a developer's own machine. In this paper we explore and evaluate whether smaller language models can be fine-tuned to achieve reasonable results in a niche area: vulnerability detection, specifically detecting the reentrancy bug in Solidity smart contracts.
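To make the setup concrete, the sketch below shows one way a small pretrained code model could be fine-tuned as a binary classifier that labels Solidity source as reentrancy-vulnerable or safe, using the Hugging Face transformers library. This is a minimal illustration, not the authors' actual pipeline: the model name, dataset layout (CSV with a "source" text column and a 0/1 "label" column), and hyperparameters are assumptions.

```python
# Minimal sketch (assumptions, not the paper's configuration): fine-tune a
# small pretrained code model to classify Solidity contracts as
# reentrancy-vulnerable (1) or safe (0).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "microsoft/codebert-base"  # assumed small code model, chosen for illustration

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Assumed dataset format: CSV files with a Solidity "source" column and a
# 0/1 "label" column (1 = contains a reentrancy vulnerability).
dataset = load_dataset(
    "csv", data_files={"train": "train.csv", "validation": "valid.csv"}
)

def tokenize(batch):
    # Truncate long contracts to the model's maximum input length.
    return tokenizer(
        batch["source"], truncation=True, padding="max_length", max_length=512
    )

dataset = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="reentrancy-detector",
    num_train_epochs=3,
    per_device_train_batch_size=8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)

trainer.train()
print(trainer.evaluate())  # accuracy/loss on the held-out validation contracts
```

A model of this size can be trained and run locally on a developer's machine, which is the trade-off the paper investigates against larger, remotely hosted LLMs.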
Similar Papers
Generative Large Language Model usage in Smart Contract Vulnerability Detection
Cryptography and Security
AI helps find bugs in online money contracts.
Leveraging Large Language Models and Machine Learning for Smart Contract Vulnerability Detection
Cryptography and Security
Finds hidden bugs in computer money code.
Logic Meets Magic: LLMs Cracking Smart Contract Vulnerabilities
Cryptography and Security
Finds hidden mistakes in online money code.