Chasing Shadows: Pitfalls in LLM Security Research
By: Jonathan Evertz, Niklas Risse, Nicolai Neuer, et al.
Large language models (LLMs) are increasingly prevalent in security research. Their unique characteristics, however, introduce challenges that undermine established paradigms of reproducibility, rigor, and evaluation. Prior work has identified common pitfalls in traditional machine learning research, but these studies predate the advent of LLMs. In this paper, we identify nine common pitfalls that have become (more) relevant with the emergence of LLMs and that can compromise the validity of research involving them. These pitfalls span the entire computation process, from data collection, pre-training, and fine-tuning to prompting and evaluation. We assess the prevalence of these pitfalls across all 72 peer-reviewed papers published at leading Security and Software Engineering venues between 2023 and 2024. We find that every paper contains at least one pitfall, and each pitfall appears in multiple papers. Yet only 15.7% of the pitfalls present in these papers were explicitly discussed, suggesting that the majority remain unrecognized. To understand their practical impact, we conduct four empirical case studies showing how individual pitfalls can mislead evaluation, inflate performance, or impair reproducibility. Based on our findings, we offer actionable guidelines to support the community in future work.
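The abstract notes that some pitfalls can impair reproducibility of LLM-based experiments. As a rough illustration only (not taken from the paper), the following minimal Python sketch shows one way an evaluation script might pin decoding parameters and log them next to each result so a later run can be compared against the same configuration; `query_llm`, the model name, and all parameter values are hypothetical placeholders, not the authors' setup or guidelines.

```python
import hashlib
import json
from datetime import datetime, timezone


def query_llm(prompt: str, *, model: str, temperature: float, seed: int) -> str:
    # Stand-in for a real model or API call; returns a canned answer so the
    # sketch runs without network access or credentials.
    return "no vulnerability found"


def run_reproducible_eval(prompt: str) -> dict:
    """Record every parameter that influences the output alongside the result."""
    config = {
        "model": "example-model-v1",  # placeholder; record the exact checkpoint/API version
        "temperature": 0.0,           # greedy decoding to reduce run-to-run variance
        "seed": 1234,                 # fixed seed where the backend supports one
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    output = query_llm(
        prompt,
        model=config["model"],
        temperature=config["temperature"],
        seed=config["seed"],
    )
    return {"config": config, "output": output}


if __name__ == "__main__":
    record = run_reproducible_eval("Does this C function contain a buffer overflow?")
    print(json.dumps(record, indent=2))
```

Logging the full decoding configuration and a hash of the prompt is one generic mitigation for the reproducibility concern raised in the abstract; the paper's own guidelines should be consulted for the authors' actual recommendations.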