Score: 3

Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks

Published: July 3, 2025 | arXiv ID: 2507.02735v1

By: Sizhe Chen , Arman Zharmagambetov , David Wagner and more

BigTech Affiliations: Meta

Potential Business Impact:

Makes AI safer from sneaky tricks.

Business Areas:
Semantic Search Internet Services

Prompt injection attacks pose a significant security threat to LLM-integrated applications. Model-level defenses have shown strong effectiveness, but are currently deployed into commercial-grade models in a closed-source manner. We believe open-source models are needed by the AI security community, where co-development of attacks and defenses through open research drives scientific progress in mitigation against prompt injection attacks. To this end, we develop Meta SecAlign, the first open-source and open-weight LLM with built-in model-level defense that achieves commercial-grade model performance. We provide complete details of our training recipe, which utilizes an improved version of the SOTA SecAlign defense. Evaluations on 9 utility benchmarks and 7 security benchmarks show that Meta SecAlign, despite being trained on a generic instruction-tuning dataset, confers security in unseen downstream tasks, including tool-calling and agentic web navigation, in addition general instruction-following. Our best model -- Meta-SecAlign-70B -- achieves state-of-the-art robustness against prompt injection attacks and comparable utility to closed-source commercial LLM with model-level defense.

Country of Origin
🇺🇸 United States


Page Count
17 pages

Category
Computer Science:
Cryptography and Security