Secret Breach Detection in Source Code with Large Language Models

Published: April 26, 2025 | arXiv ID: 2504.18784v2

By: Md Nafiu Rahman, Sadif Ahmed, Zahin Wahab, and more

Potential Business Impact:

Detects leaked secrets, such as API keys and credentials, hidden in source code.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Background: Leaking sensitive information, such as API keys, tokens, and credentials, in source code remains a persistent security threat. Traditional regex- and entropy-based tools often generate high false-positive rates due to limited contextual understanding.

Aims: This work aims to enhance secret detection in source code using large language models (LLMs), reducing false positives while maintaining high recall. We also evaluate the feasibility of using fine-tuned, smaller models for local deployment.

Method: We propose a hybrid approach combining regex-based candidate extraction with LLM-based classification. We evaluate pre-trained and fine-tuned variants of various large language models on a benchmark dataset drawn from 818 GitHub repositories. Various prompting strategies and efficient fine-tuning methods are employed for both binary and multiclass classification.

Results: The fine-tuned LLaMA-3.1 8B model achieved an F1-score of 0.9852 in binary classification, outperforming regex-only baselines. For multiclass classification, Mistral-7B reached 0.982 accuracy. Fine-tuning significantly improved performance across all models.

Conclusions: Fine-tuned LLMs offer an effective and scalable solution for secret detection, greatly reducing false positives. Open-source models provide a practical alternative to commercial APIs, enabling secure and cost-efficient deployment in development workflows.
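The two-stage pipeline described in the Method section (regex-based candidate extraction followed by LLM-based classification) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the regex patterns, prompt wording, and the `llm_call` hook are hypothetical, and the paper's fine-tuned models (e.g., LLaMA-3.1 8B) would sit behind that hook.

```python
import re

# Illustrative candidate patterns only; the paper's actual pattern set is not reproduced here.
CANDIDATE_PATTERNS = [
    re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]


def extract_candidates(source: str):
    """Stage 1: regex-based extraction (high recall, many false positives)."""
    candidates = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in CANDIDATE_PATTERNS):
            candidates.append((lineno, line.strip()))
    return candidates


def build_prompt(snippet: str) -> str:
    """Stage 2: wrap a candidate line in a binary-classification prompt for an LLM."""
    return (
        "Does the following line of source code contain a real leaked secret "
        "(API key, token, credential), or a placeholder/test value? "
        "Answer exactly 'SECRET' or 'NOT_SECRET'.\n\n"
        f"Line: {snippet}\n"
    )


def classify_candidates(source: str, llm_call):
    """Combine both stages. `llm_call` is any function that sends a prompt to a
    (possibly fine-tuned) model and returns its text response."""
    results = []
    for lineno, snippet in extract_candidates(source):
        answer = llm_call(build_prompt(snippet)).strip().upper()
        results.append((lineno, snippet, answer == "SECRET"))
    return results
```

The design point the abstract emphasizes is that the regex stage keeps recall high while the LLM stage uses surrounding context to discard placeholders and test values, which is where regex-only tools accumulate false positives.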

Country of Origin
🇧🇩 Bangladesh

Page Count
11 pages

Category
Computer Science:
Software Engineering