Unveiling Malicious Logic: Towards a Statement-Level Taxonomy and Dataset for Securing Python Packages
By: Ahmed Ryan, Junaid Mansur Ifti, Md Erfan, and more
Potential Business Impact:
Finds bad code hidden in software packages.
The widespread adoption of open-source ecosystems enables developers to integrate third-party packages, but it also exposes them to malicious packages, distributed through public repositories such as PyPI, that are crafted to execute harmful behavior. Existing datasets (e.g., pypi-malregistry, DataDog, OpenSSF, MalwareBench) label packages as malicious or benign at the package level, but do not specify which statements implement malicious behavior. This coarse granularity limits research and practice: models cannot be trained to localize malicious code, detectors cannot justify alerts with code-level evidence, and analysts cannot systematically study recurring malicious indicators or attack chains. To address this gap, we construct a statement-level dataset of 370 malicious Python packages (833 files, 90,527 lines) with 2,962 labeled occurrences of malicious indicators. From these annotations, we derive a fine-grained taxonomy of 47 malicious indicators across 7 types that capture how adversarial behavior is implemented in code, and we apply sequential pattern mining to uncover recurring indicator sequences that characterize common attack workflows. Our contribution enables explainable, behavior-centric detection and supports both semantic-aware model training and practical heuristics for strengthening software supply-chain defenses.
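The abstract applies sequential pattern mining to per-file sequences of labeled indicators but does not name a specific algorithm. As an illustrative sketch only, the following minimal PrefixSpan-style miner shows how recurring indicator sequences could be extracted; the indicator names (e.g., base64_decode, dynamic_exec) and the prefixspan function are hypothetical placeholders, not labels or code from the dataset.

```python
from collections import defaultdict

def prefixspan(sequences, min_support):
    """Mine frequent sequential patterns from ordered indicator sequences.

    A pattern is a (possibly non-contiguous) ordered subsequence of indicators
    that appears in at least `min_support` of the input sequences.
    """
    results = []

    def mine(prefix, projected):
        # Count each candidate extension once per projected sequence.
        counts = defaultdict(int)
        for seq, start in projected:
            seen = set()
            for item in seq[start:]:
                if item not in seen:
                    counts[item] += 1
                    seen.add(item)
        for item, support in counts.items():
            if support < min_support:
                continue
            new_prefix = prefix + [item]
            results.append((new_prefix, support))
            # Project each sequence past the first occurrence of `item`.
            new_projected = []
            for seq, start in projected:
                for i in range(start, len(seq)):
                    if seq[i] == item:
                        new_projected.append((seq, i + 1))
                        break
            mine(new_prefix, new_projected)

    mine([], [(seq, 0) for seq in sequences])
    return results

# Hypothetical per-file indicator sequences (placeholder labels).
sequences = [
    ["base64_decode", "dynamic_exec", "http_exfiltration"],
    ["env_var_read", "base64_decode", "dynamic_exec"],
    ["base64_decode", "http_exfiltration"],
]
for pattern, support in prefixspan(sequences, min_support=2):
    print(f"{support}x  " + " -> ".join(pattern))
```

Under these assumptions, the miner would surface chains such as base64_decode followed by dynamic_exec, the kind of recurring indicator sequence the paper uses to characterize common attack workflows.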
Similar Papers
Towards Classifying Benign And Malicious Packages Using Machine Learning
Cryptography and Security
Finds bad computer code before it causes harm.
MalGuard: Towards Real-Time, Accurate, and Actionable Detection of Malicious Packages in PyPI Ecosystem
Cryptography and Security
Finds bad computer code before it causes harm.
MASCOT: Analyzing Malware Evolution Through A Well-Curated Source Code Dataset
Cryptography and Security
Untangles computer virus family trees to find new threats.