A Framework to Assess Multilingual Vulnerabilities of LLMs
By: Likai Tang, Niruth Bogahawatta, Yasod Ginige, and more
Potential Business Impact:
Reveals hidden safety weaknesses in languages with less training data.
Large Language Models (LLMs) are acquiring a wider range of capabilities, including understanding and responding in multiple languages. While they undergo safety training to prevent them from answering illegal questions, imbalances in training data and human evaluation resources can make these models more susceptible to attacks in low-resource languages (LRLs). This paper proposes a framework to automatically assess the multilingual vulnerabilities of commonly used LLMs. Using our framework, we evaluated six LLMs across eight languages representing varying levels of resource availability. We validated the assessments generated by our automated framework through human evaluation in two languages, demonstrating that the framework's results align with human judgments in most cases. Our findings reveal vulnerabilities in LRLs; however, these may pose minimal risk, as they often stem from the model's poor performance in those languages, which results in incoherent responses.
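The listing does not describe how the framework is implemented, so the following is only a minimal sketch of what an automated multilingual vulnerability assessment could look like, not the paper's actual method: translate a set of unsafe prompts into each target language, query each model, and automatically label the responses. The helper callables (translate, query_model, judge_response) and all names below are hypothetical placeholders the user would supply, e.g. a machine-translation service, model API clients, and a judge model or heuristic.

```python
# Hypothetical sketch of an automated multilingual vulnerability assessment loop.
# This is an illustration based on the abstract, not the paper's implementation.
from collections import Counter, defaultdict
from typing import Callable, Dict, Iterable


def assess_multilingual_vulnerability(
    models: Iterable[str],
    languages: Iterable[str],
    unsafe_prompts: Iterable[str],
    translate: Callable[[str, str], str],    # (english_prompt, lang) -> localized prompt
    query_model: Callable[[str, str], str],  # (model_name, prompt) -> response text
    judge_response: Callable[[str], str],    # response -> "refusal" | "harmful" | "incoherent"
) -> Dict[str, Dict[str, Counter]]:
    """Return per-model, per-language counts of response labels."""
    results: Dict[str, Dict[str, Counter]] = defaultdict(dict)
    prompts = list(unsafe_prompts)
    for model in models:
        for lang in languages:
            labels = Counter()
            for prompt in prompts:
                localized = translate(prompt, lang)       # e.g. machine translation
                response = query_model(model, localized)  # query the LLM under test
                labels[judge_response(response)] += 1     # automated judgment
            results[model][lang] = labels
    return results


# Example run with trivial stand-in callables (placeholders, not real services):
if __name__ == "__main__":
    report = assess_multilingual_vulnerability(
        models=["model-a", "model-b"],
        languages=["en", "si", "ta"],  # e.g. from high- to low-resource
        unsafe_prompts=["<prompt a safety-trained model should refuse>"],
        translate=lambda text, lang: text,  # stub: no real translation
        query_model=lambda model, prompt: "I cannot help with that.",
        judge_response=lambda r: "refusal" if "cannot" in r.lower() else "harmful",
    )
    print(report)
```

The automated judging step is where a framework like this would be compared against human evaluation, as the paper reports doing for two of its eight languages.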
Similar Papers
A Preliminary Study of Large Language Models for Multilingual Vulnerability Detection
Software Engineering
Uses LLMs to find security bugs in code written in many programming languages.
Exploring the Multilingual NLG Evaluation Abilities of LLM-Based Evaluators
Computation and Language
Helps computers judge writing quality in many languages.
Evaluating LLMs Robustness in Less Resourced Languages with Proxy Models
Computation and Language
Checks how well AI holds up in less common languages.