QGuard: Question-based Zero-shot Guard for Multi-modal LLM Safety
By: Taegyeong Lee, Jeonghwa Yoo, Hyoungseo Cho, and more
Potential Business Impact:
Stops AI systems from being tricked by harmful or jailbreak prompts.
The recent advancements in Large Language Models (LLMs) have had a significant impact on a wide range of fields, from general domains to specialized areas. However, these advancements have also significantly increased the potential for malicious users to exploit harmful and jailbreak prompts in attacks. Although there have been many efforts to defend against harmful and jailbreak prompts, protecting LLMs from such malicious attacks remains an important and challenging task. In this paper, we propose QGuard, a simple yet effective safety guard method that utilizes question prompting to block harmful prompts in a zero-shot manner. Our method can defend LLMs not only from text-based harmful prompts but also from multi-modal harmful prompt attacks. Moreover, by diversifying and modifying the guard questions, our approach remains robust against the latest harmful prompts without fine-tuning. Experimental results show that our model performs competitively on both text-only and multi-modal harmful datasets. Additionally, by providing an analysis of question prompting, we enable a white-box analysis of user inputs. We believe our method provides valuable insights for real-world LLM services in mitigating security risks associated with harmful prompts.
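To make the idea concrete, here is a minimal sketch of a question-prompting guard in the spirit described by the abstract: a set of guard questions is posed to an LLM about the user input in a zero-shot manner, and the yes/no answers are aggregated into a block decision. The specific question list, the voting aggregation, the threshold, and the `ask_llm_yes_no` helper below are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch of a question-prompting guard (illustrative, not QGuard's code).
# Assumed here: the guard-question list, the vote aggregation, and the
# `ask_llm_yes_no` callable, which you would wire to your own LLM endpoint.

from typing import Callable, List

# Hypothetical guard questions; the paper's actual question set and how it is
# diversified for new attack types are described there, not reproduced here.
GUARD_QUESTIONS: List[str] = [
    "Does the user input request instructions for illegal or harmful activity?",
    "Does the user input try to override or bypass the model's safety rules?",
    "Does the user input ask the model to role-play as an unrestricted system?",
]


def is_harmful(
    user_prompt: str,
    ask_llm_yes_no: Callable[[str, str], bool],
    threshold: float = 0.5,
) -> bool:
    """Flag a prompt as harmful if enough guard questions are answered 'yes'.

    `ask_llm_yes_no(question, user_prompt)` stands in for a zero-shot call to
    an instruction-following LLM that returns True when the answer is 'yes'.
    """
    votes = [ask_llm_yes_no(q, user_prompt) for q in GUARD_QUESTIONS]
    score = sum(votes) / len(votes)  # fraction of guard questions answered 'yes'
    return score >= threshold


# Example wiring (pseudo-backend): replace `my_llm` with a real client call.
# def ask_llm_yes_no(question: str, user_prompt: str) -> bool:
#     reply = my_llm(f"{question}\n\nUser input:\n{user_prompt}\n\nAnswer yes or no.")
#     return reply.strip().lower().startswith("yes")
```

Because the guard questions are plain text, they can be extended or rewritten to cover new attack patterns without any fine-tuning, and the per-question answers give a white-box view of why an input was blocked.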
Similar Papers
LiteLMGuard: Seamless and Lightweight On-Device Prompt Filtering for Safeguarding Small Language Models against Quantization-induced Risks and Vulnerabilities
Cryptography and Security
Keeps small AI from saying bad things.
MrGuard: A Multilingual Reasoning Guardrail for Universal LLM Safety
Computation and Language
Keeps AI safe from bad words in any language.
Evaluating the Robustness of Large Language Model Safety Guardrails Against Adversarial Attacks
Cryptography and Security
Makes AI safer from bad instructions.