Evaluating LLMs Robustness in Less Resourced Languages with Proxy Models
By: Maciej Chrabąszcz, Katarzyna Lorenc, Karolina Seweryn
Potential Business Impact:
Shows how AI safety guardrails can be bypassed in languages other than English.
Large language models (LLMs) have demonstrated impressive capabilities across various natural language processing (NLP) tasks in recent years. However, their susceptibility to jailbreaks and perturbations necessitates additional evaluations. Many LLMs are multilingual, but safety-related training data covers mainly high-resource languages such as English. This can leave them vulnerable to perturbations in low-resource languages such as Polish. We show how surprisingly strong attacks can be created cheaply by altering just a few characters and using a small proxy model to calculate word importance. We find that these character- and word-level attacks drastically alter the predictions of different LLMs, suggesting a potential vulnerability that can be used to circumvent their internal safety mechanisms. We validate our attack construction methodology on Polish, a low-resource language, and uncover potential vulnerabilities of LLMs in this language. Additionally, we show how it can be extended to other languages. We release the created datasets and code for further research.
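To make the attack recipe described in the abstract concrete, here is a minimal, hypothetical sketch: a small proxy classifier ranks words by how much their removal changes the prediction, then a few characters in the most important words are perturbed. The proxy model name, the deletion-based importance heuristic, and the adjacent-character swap are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a proxy-guided character-level attack.
# Assumptions: a Hugging Face sequence classifier serves as the proxy;
# word importance is estimated by leave-one-word-out deletion;
# the perturbation is a simple adjacent-character swap.
import random
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

PROXY = "distilbert-base-multilingual-cased"  # placeholder proxy model
tokenizer = AutoTokenizer.from_pretrained(PROXY)
model = AutoModelForSequenceClassification.from_pretrained(PROXY)
model.eval()

def predict(text: str) -> torch.Tensor:
    """Class probabilities from the proxy model for a single text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return torch.softmax(model(**inputs).logits, dim=-1)[0]

def word_importance(text: str) -> list[tuple[int, float]]:
    """Rank word positions by the drop in the original class probability
    when that word is deleted."""
    words = text.split()
    base_probs = predict(text)
    label = int(base_probs.argmax())
    drops = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        drops.append((i, float(base_probs[label] - predict(reduced)[label])))
    return sorted(drops, key=lambda pair: pair[1], reverse=True)

def perturb_chars(word: str) -> str:
    """Swap two adjacent characters -- one cheap character-level edit."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def attack(text: str, budget: int = 3) -> str:
    """Perturb the `budget` most important words found via the proxy model."""
    words = text.split()
    for idx, _ in word_importance(text)[:budget]:
        words[idx] = perturb_chars(words[idx])
    return " ".join(words)
```

The perturbed text can then be sent to the target LLM; because only a handful of characters change, the prompt remains readable to humans while the model's behavior can shift substantially.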
Similar Papers
A Framework to Assess Multilingual Vulnerabilities of LLMs
Computation and Language
Finds hidden dangers in languages with less data.
Evolving Security in LLMs: A Study of Jailbreak Attacks and Defenses
Cryptography and Security
Makes AI safer against harmful instructions.
Exploring the Multilingual NLG Evaluation Abilities of LLM-Based Evaluators
Computation and Language
Helps computers judge writing quality in many languages.