SEA-SafeguardBench: Evaluating AI Safety in SEA Languages and Cultures

Published: December 5, 2025 | arXiv ID: 2512.05501v1

By: Panuthep Tasawong, Jian Gang Ngui, Alham Fikri Aji, and more

Potential Business Impact:

Helps computers detect and block harmful content in many languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Safeguard models help large language models (LLMs) detect and block harmful content, but most evaluations remain English-centric and overlook linguistic and cultural diversity. Existing multilingual safety benchmarks often rely on machine-translated English data, which fails to capture nuances in low-resource languages. Southeast Asian (SEA) languages are underrepresented despite the region's linguistic diversity and unique safety concerns, from culturally sensitive political speech to region-specific misinformation. Addressing these gaps requires benchmarks that are natively authored to reflect local norms and harm scenarios. We introduce SEA-SafeguardBench, the first human-verified safety benchmark for SEA, covering eight languages and 21,640 samples across three subsets: general, in-the-wild, and content generation. Experimental results on our benchmark demonstrate that even state-of-the-art LLMs and guardrails are challenged by SEA cultural and harm scenarios and underperform relative to English text.
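To make the evaluation setup concrete, below is a minimal sketch of how a safeguard model might be scored per language on a benchmark of this shape. The field names ("text", "language", "label") and the `classify` callable are assumptions for illustration; the paper's actual data schema, subsets, and metrics may differ.

```python
# Minimal sketch: per-language accuracy for a safeguard (harmful-content)
# classifier. Assumes each sample is a dict with "text", "language", and
# "label" keys -- a hypothetical layout, not the benchmark's real schema.
from collections import defaultdict

def evaluate_safeguard(classify, samples):
    """Return {language: accuracy} for a binary safe/unsafe classifier.

    classify: callable mapping a text string to "safe" or "unsafe".
    samples:  iterable of dicts with "text", "language", "label" keys.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for s in samples:
        lang = s["language"]
        total[lang] += 1
        if classify(s["text"]) == s["label"]:
            correct[lang] += 1
    return {lang: correct[lang] / total[lang] for lang in total}

if __name__ == "__main__":
    # Toy data only; a real run would iterate over the benchmark's
    # general, in-the-wild, and content-generation subsets.
    toy = [
        {"text": "hello", "language": "th", "label": "safe"},
        {"text": "scam offer", "language": "id", "label": "unsafe"},
    ]
    # A trivial classifier that labels everything "safe" -- stand-in for
    # a real guardrail model.
    print(evaluate_safeguard(lambda t: "safe", toy))
```

Comparing the resulting per-language scores against the English score is what surfaces the gap the paper reports: guardrails that look strong on English text can degrade on natively authored SEA-language prompts.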

Page Count
30 pages

Category
Computer Science:
Computation and Language