OpenGuardrails: An Open-Source Context-Aware AI Guardrails Platform
By: Thomas Wang, Haowen Li
Potential Business Impact:
Keeps AI from saying bad or harmful things.
As large language models (LLMs) become increasingly integrated into real-world applications, safeguarding them against unsafe, malicious, or privacy-violating content is critically important. We present OpenGuardrails, the first open-source project to provide both a context-aware safety and manipulation detection model and a deployable platform for comprehensive AI guardrails. OpenGuardrails protects against content-safety risks, model-manipulation attacks (e.g., prompt injection, jailbreaking, code-interpreter abuse, and the generation/execution of malicious code), and data leakage. Content-safety and model-manipulation detection are implemented by a unified large model, while data-leakage identification and redaction are performed by a separate lightweight NER pipeline (e.g., Presidio-style models or regex-based detectors). The system can be deployed as a security gateway or an API-based service, with enterprise-grade, fully private deployment options. OpenGuardrails achieves state-of-the-art (SOTA) performance on safety benchmarks, excelling in both prompt and response classification across English, Chinese, and multilingual tasks. All models are released under the Apache 2.0 license for public use.
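The abstract notes that data-leakage identification and redaction are handled by a lightweight pipeline of Presidio-style NER models or regex-based detectors. As a minimal sketch of the regex-detector idea (the pattern names and placeholder format here are illustrative assumptions, not OpenGuardrails' actual implementation):

```python
import re

# Hypothetical regex-based PII detectors, in the spirit of the paper's
# lightweight data-leakage pipeline. Patterns and labels are assumptions
# for illustration only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# → Contact <EMAIL> or <PHONE>.
```

A production system would typically layer an NER model on top of such regexes to catch entities (names, addresses) that have no fixed surface pattern, which is why the paper keeps this pipeline separate from the unified safety model.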
Similar Papers
OpenGuardrails: A Configurable, Unified, and Scalable Guardrails Platform for Large Language Models
Cryptography and Security
Keeps AI from saying bad things or stealing secrets.
AdaptiveGuard: Towards Adaptive Runtime Safety for LLM-Powered Software
Cryptography and Security
Keeps AI safe from bad instructions.
SGuard-v1: Safety Guardrail for Large Language Models
Computation and Language
Keeps AI from saying bad or dangerous things.