Secret Breach Detection in Source Code with Large Language Models
By: Md Nafiu Rahman, Sadif Ahmed, Zahin Wahab, and more
Potential Business Impact:
Finds leaked secrets, such as passwords and API keys, in computer code.
Background: Leaking sensitive information, such as API keys, tokens, and credentials, in source code remains a persistent security threat. Traditional regex- and entropy-based tools often produce high false-positive rates because they have limited contextual understanding.
Aims: This work aims to enhance secret detection in source code using large language models (LLMs), reducing false positives while maintaining high recall. We also evaluate the feasibility of using fine-tuned, smaller models for local deployment.
Method: We propose a hybrid approach that combines regex-based candidate extraction with LLM-based classification. We evaluate pre-trained and fine-tuned variants of various large language models on a benchmark dataset drawn from 818 GitHub repositories, employing various prompting strategies and efficient fine-tuning methods for both binary and multiclass classification.
Results: The fine-tuned LLaMA-3.1 8B model achieved an F1-score of 0.9852 in binary classification, outperforming regex-only baselines. For multiclass classification, Mistral-7B reached an accuracy of 0.982. Fine-tuning significantly improved performance across all models.
Conclusions: Fine-tuned LLMs offer an effective and scalable solution for secret detection, greatly reducing false positives. Open-source models provide a practical alternative to commercial APIs, enabling secure and cost-efficient deployment in development workflows.
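To make the hybrid pipeline concrete, here is a minimal sketch of the two stages the abstract describes: a regex sweep that over-approximates the set of candidate secrets, followed by a per-candidate LLM verdict. The regex patterns, the prompt wording, the 3.5-bit entropy threshold, and the stand-in classifier are illustrative assumptions, not the paper's implementation; in practice `classify_with_llm` would call a fine-tuned model such as LLaMA-3.1 8B.

```python
import math
import re

# Illustrative candidate patterns; the paper's actual pattern set (used to
# scan its 818-repository benchmark) is not shown here and may differ.
CANDIDATE_PATTERNS = [
    re.compile(r"(?P<value>AKIA[0-9A-Z]{16})"),  # AWS access key ID shape
    re.compile(r"(?i)(?:api[_-]?key|token|secret)\s*[:=]\s*['\"](?P<value>[^'\"]{8,})['\"]"),
]


def shannon_entropy(s: str) -> float:
    """Bits per character; random-looking strings score higher."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)


def extract_candidates(source: str) -> list[dict]:
    """Stage 1: cheap regex sweep that over-approximates the set of secrets."""
    candidates = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in CANDIDATE_PATTERNS:
            for m in pattern.finditer(line):
                value = m.group("value")
                candidates.append({
                    "line": lineno,
                    "context": line.strip(),
                    "value": value,
                    "entropy": shannon_entropy(value),
                })
    return candidates


def build_prompt(candidate: dict) -> str:
    """Stage 2 input: ask the model to judge the candidate in its code context."""
    return (
        "You are a security reviewer. Given a line of code and a string "
        "flagged by regex, answer YES if it is a real hard-coded secret and "
        "NO if it is a false positive (test fixture, placeholder, example).\n"
        f"Code line: {candidate['context']}\n"
        f"Candidate: {candidate['value']}\n"
        "Answer (YES/NO):"
    )


def classify_with_llm(candidate: dict) -> bool:
    """Stage 2: LLM verdict. In practice this would send build_prompt() to a
    fine-tuned local model (e.g. LLaMA-3.1 8B behind an OpenAI-compatible
    endpoint); an entropy cutoff stands in so the sketch runs end to end."""
    _ = build_prompt(candidate)  # the prompt is where the model call would go
    return candidate["entropy"] > 3.5  # assumed threshold, not from the paper


if __name__ == "__main__":
    sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\napi_key = "placeholder"'
    for cand in extract_candidates(sample):
        verdict = "secret" if classify_with_llm(cand) else "false positive"
        print(f"line {cand['line']}: {cand['value']!r} -> {verdict}")
```

The two-stage split reflects the design rationale in the abstract: the regex pass keeps recall high at negligible cost, and the comparatively expensive model call is spent only on the small set of flagged candidates, which is what drives down false positives.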
Similar Papers
Case Study: Fine-tuning Small Language Models for Accurate and Private CWE Detection in Python Code
Cryptography and Security
Finds code weaknesses locally, privately, and fast.
Detecting Hard-Coded Credentials in Software Repositories via LLMs
Cryptography and Security
Finds hard-coded passwords and keys in software projects.
LLMs in Code Vulnerability Analysis: A Proof of Concept
Software Engineering
Helps computers find and analyze code flaws.