Constructing and Benchmarking a Labeled Email Dataset for a Text-Based Phishing and Spam Detection Framework
By: Rebeka Toth, Tamas Bisztray, Richard Dubniczky
Potential Business Impact:
Detects fake (phishing and spam) emails, including ones written by AI language models.
Phishing and spam emails remain a major cybersecurity threat, with attackers increasingly leveraging Large Language Models (LLMs) to craft highly deceptive content. This study presents a comprehensive email dataset containing phishing, spam, and legitimate messages, explicitly distinguishing between human- and LLM-generated content. Each email is annotated with its category, emotional appeal (e.g., urgency, fear, authority), and underlying motivation (e.g., link-following, credential theft, financial fraud). We benchmark multiple LLMs on their ability to identify these emotional and motivational cues and select the most reliable model to annotate the full dataset. To evaluate classification robustness, emails were also rephrased using several LLMs while preserving meaning and intent. A state-of-the-art LLM was then assessed on its performance across both original and rephrased emails using expert-labeled ground truth. The results highlight strong phishing detection capabilities but reveal persistent challenges in distinguishing spam from legitimate emails. Our dataset and evaluation framework contribute to improving AI-assisted email security systems. To support open science, all code, templates, and resources are available on our project site.
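The abstract describes a labeled record per email (category, emotional appeal, motivation, human- vs. LLM-generated) and a robustness check that compares classification on original and rephrased texts against expert labels. Below is a minimal sketch of how such a record and comparison could look; the field names, the `EmailRecord` class, and the `accuracy` helper are illustrative assumptions, not the authors' actual schema or evaluation code.

```python
# Hypothetical sketch of a labeled email record and an original-vs-rephrased
# robustness check; names and fields are assumptions, not the paper's schema.
from dataclasses import dataclass, field


@dataclass
class EmailRecord:
    text: str                      # raw email body
    category: str                  # "phishing", "spam", or "legitimate" (expert ground truth)
    source: str                    # "human" or "llm"
    emotional_appeals: list[str] = field(default_factory=list)   # e.g. ["urgency", "fear", "authority"]
    motivation: str | None = None  # e.g. "credential_theft", "financial_fraud", "link_following"
    rephrased_variants: list[str] = field(default_factory=list)  # LLM rewrites preserving meaning and intent


def accuracy(records: list[EmailRecord], predict) -> tuple[float, float]:
    """Return (accuracy on originals, accuracy on rephrased variants).

    `predict` is any callable mapping an email body to a category label,
    e.g. a wrapper around an LLM classifier; the expert-assigned
    `category` field serves as ground truth for both variants.
    """
    orig_hits = sum(predict(r.text) == r.category for r in records)
    reph_total, reph_hits = 0, 0
    for r in records:
        for variant in r.rephrased_variants:
            reph_total += 1
            reph_hits += predict(variant) == r.category
    return orig_hits / len(records), reph_hits / max(reph_total, 1)


if __name__ == "__main__":
    # Toy usage with a trivial keyword-based stand-in for an LLM classifier.
    sample = [
        EmailRecord(
            text="Your account will be suspended. Verify your password now.",
            category="phishing",
            source="llm",
            emotional_appeals=["urgency", "fear"],
            motivation="credential_theft",
            rephrased_variants=["Immediate action required: confirm your login details."],
        ),
    ]
    naive = lambda t: "phishing" if "password" in t.lower() or "login" in t.lower() else "legitimate"
    print(accuracy(sample, naive))
```

In practice the `predict` callable would wrap the benchmarked LLM, and a gap between the two accuracies would indicate sensitivity to rephrasing.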
Similar Papers
Robust ML-based Detection of Conventional, LLM-Generated, and Adversarial Phishing Emails Using Advanced Text Preprocessing
Cryptography and Security
Catches phishing emails, even AI-written or deliberately disguised ones.
Phishing Email Detection Using Large Language Models
Cryptography and Security
Uses large AI language models to spot phishing emails.
LLM-Powered Intent-Based Categorization of Phishing Emails
Cryptography and Security
Spots phishing emails by working out what the sender is trying to get you to do.