Privacy-Preserving Prompt Injection Detection for LLMs Using Federated Learning and Embedding-Based NLP Classification
By: Hasini Jayathilaka
Potential Business Impact:
Protects smart computers from being tricked safely.
Prompt injection attacks are an emerging threat to large language models (LLMs), enabling malicious users to manipulate outputs through carefully designed inputs. Existing detection approaches often require centralizing prompt data, creating significant privacy risks. This paper proposes a privacy-preserving prompt injection detection framework based on federated learning and embedding-based classification. A curated dataset of benign and adversarial prompts was encoded with sentence embedding and used to train both centralized and federated logistic regression models. The federated approach preserved privacy by sharing only model parameters across clients, while achieving detection performance comparable to centralized training. Results demonstrate that effective prompt injection detection is feasible without exposing raw data, making this one of the first explorations of federated security for LLMs. Although the dataset is limited in scale, the findings establish a strong proof-of-concept and highlight new directions for building secure and privacy-aware LLM systems.
Similar Papers
Detecting Prompt Injection Attacks Against Application Using Classifiers
Cryptography and Security
Stops bad instructions from breaking computer programs.
Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems
Cryptography and Security
Stops bad guys from stealing secrets from smart computer programs.
Prompt Fencing: A Cryptographic Approach to Establishing Security Boundaries in Large Language Model Prompts
Cryptography and Security
Keeps smart computer programs safe from bad instructions.