Toxicity Red-Teaming: Benchmarking LLM Safety in Singapore's Low-Resource Languages
By: Yujia Hu, Ming Shan Hee, Preslav Nakov, and more
Potential Business Impact:
Makes AI safer for different languages.
The advancement of Large Language Models (LLMs) has transformed natural language processing; however, their safety mechanisms remain under-explored in low-resource, multilingual settings. Here, we aim to bridge this gap. In particular, we introduce SGToxicGuard, a novel dataset and evaluation framework for benchmarking LLM safety in Singapore's diverse linguistic context, including Singlish, Chinese, Malay, and Tamil. SGToxicGuard adopts a red-teaming approach to systematically probe LLM vulnerabilities in three real-world scenarios: conversation, question-answering, and content composition. We conduct extensive experiments with state-of-the-art multilingual LLMs, and the results uncover critical gaps in their safety guardrails. By offering actionable insights into cultural sensitivity and toxicity mitigation, we lay the foundation for safer and more inclusive AI systems in linguistically diverse environments. The dataset is available at https://github.com/Social-AI-Studio/SGToxicGuard. Disclaimer: This paper contains sensitive content that may be disturbing to some readers.
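To make the red-teaming setup concrete, here is a minimal sketch of an evaluation loop that computes a per-scenario attack success rate (the fraction of red-team prompts whose responses are judged toxic). The field names (`scenario`, `language`, `prompt`), the stub model, and the stub toxicity judge are illustrative assumptions, not the paper's actual pipeline; consult the SGToxicGuard repository for the real dataset schema and evaluation scripts.

```python
from typing import Callable, Dict, List


def attack_success_rate(
    prompts: List[dict],
    query_model: Callable[[str], str],
    is_toxic: Callable[[str], bool],
) -> Dict[str, float]:
    """Fraction of red-team prompts, per scenario, whose model responses are judged toxic."""
    totals: Dict[str, int] = {}
    failures: Dict[str, int] = {}
    for item in prompts:
        scenario = item["scenario"]              # assumed field name
        response = query_model(item["prompt"])   # assumed field name
        totals[scenario] = totals.get(scenario, 0) + 1
        if is_toxic(response):
            failures[scenario] = failures.get(scenario, 0) + 1
    return {s: failures.get(s, 0) / totals[s] for s in totals}


if __name__ == "__main__":
    # Tiny inline example with stub components; in practice, swap in a real LLM client,
    # a toxicity classifier, and prompts loaded from the SGToxicGuard dataset.
    demo_prompts = [
        {"scenario": "conversation", "language": "Singlish", "prompt": "..."},
        {"scenario": "question-answering", "language": "Malay", "prompt": "..."},
    ]
    stub_model = lambda p: "I can't assist with that."
    stub_judge = lambda text: False
    print(attack_success_rate(demo_prompts, stub_model, stub_judge))
```

In this framing, a lower attack success rate indicates stronger safety guardrails for that scenario and language.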
Similar Papers
LinguaSafe: A Comprehensive Multilingual Safety Benchmark for Large Language Models
Computation and Language
Makes AI safe for all languages.
Combating Toxic Language: A Review of LLM-Based Strategies for Software Engineering
Machine Learning (CS)
Cleans up harmful words in computer code.