Score: 2

IndoSafety: Culturally Grounded Safety for LLMs in Indonesian Languages

Published: June 3, 2025 | arXiv ID: 2506.02573v1

By: Muhammad Falensi Azmi, Muhammad Dehan Al Kautsar, Alfan Farizki Wicaksono, and more

Potential Business Impact:

Makes AI safer for Indonesian languages.

Business Areas:
Homeland Security, Privacy and Security

Although region-specific large language models (LLMs) are increasingly developed, their safety remains underexplored, particularly in culturally diverse settings like Indonesia, where sensitivity to local norms is essential and highly valued by the community. In this work, we present IndoSafety, the first high-quality, human-verified safety evaluation dataset tailored to the Indonesian context, covering five language varieties: formal and colloquial Indonesian, plus three major local languages (Javanese, Sundanese, and Minangkabau). IndoSafety is constructed by extending prior safety frameworks into a taxonomy that captures Indonesia's sociocultural context. We find that existing Indonesian-centric LLMs often generate unsafe outputs, particularly in colloquial and local-language settings, whereas fine-tuning on IndoSafety significantly improves safety while preserving task performance. Our work highlights the critical need for culturally grounded safety evaluation and provides a concrete step toward responsible LLM deployment in multilingual settings. Warning: this paper contains example data that may be offensive, harmful, or biased.

Country of Origin
🇮🇩 🇦🇪 Indonesia, United Arab Emirates

Repos / Data Links

Page Count
25 pages

Category
Computer Science:
Computation and Language