Evaluating Prompt-Driven Chinese Large Language Models: The Influence of Persona Assignment on Stereotypes and Safeguards
By: Geng Liu, Li Feng, Carlo Alberto Bono, and more
Potential Business Impact:
Shows how persona prompts can make AI say harmful things about people, and how to curb it.
Recent research has highlighted that assigning specific personas to large language models (LLMs) can significantly increase harmful content generation. Yet limited attention has been paid to persona-driven toxicity in non-Western contexts, particularly in Chinese-based LLMs. In this paper, we perform a large-scale, systematic analysis of how persona assignment influences refusal behavior and response toxicity in Qwen, a widely used Chinese language model. Using fine-tuned BERT classifiers and regression analysis, we reveal significant gender biases in refusal rates and show that certain negative personas can amplify toxicity toward Chinese social groups by up to 60-fold compared to the default model. To mitigate this toxicity, we propose a multi-model feedback strategy that uses iterative interactions between Qwen and an external evaluator to reduce toxic outputs without costly model retraining. Our findings emphasize the need for culturally specific analyses of LLM safety and offer a practical framework for evaluating and improving the ethical alignment of LLM-generated content.
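The abstract describes the mitigation only at a high level: the generator (Qwen) and an external toxicity evaluator interact iteratively until the output is acceptable. The sketch below illustrates one plausible shape of such a loop; the paper does not publish its prompts, evaluator, or thresholds, so `generate`, `score_toxicity`, `max_rounds`, and `threshold` are hypothetical stand-ins rather than the authors' implementation.

```python
# Minimal sketch of an iterative multi-model feedback loop (assumption-based,
# not the paper's code): a persona-conditioned generator is asked to revise
# its answer until an external evaluator's toxicity score drops below a threshold.

from typing import Callable


def detoxify_with_feedback(
    prompt: str,
    generate: Callable[[str], str],          # stand-in for a Qwen-style LLM call
    score_toxicity: Callable[[str], float],  # stand-in external evaluator, returns 0..1
    max_rounds: int = 3,
    threshold: float = 0.2,
) -> str:
    """Regenerate until the evaluator deems the response non-toxic or rounds run out."""
    response = generate(prompt)
    for _ in range(max_rounds):
        toxicity = score_toxicity(response)
        if toxicity < threshold:
            break
        # Feed the evaluator's verdict back to the generator as a revision request.
        revision_prompt = (
            f"{prompt}\n\nYour previous answer was rated as toxic "
            f"(score={toxicity:.2f}). Rewrite it so it is respectful and non-toxic:\n"
            f"{response}"
        )
        response = generate(revision_prompt)
    return response


if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs without any model access.
    canned = iter(["a rude first draft", "a polite revision"])
    demo_generate = lambda _prompt: next(canned)
    demo_score = lambda text: 0.9 if "rude" in text else 0.05
    print(detoxify_with_feedback("Describe this social group.", demo_generate, demo_score))
```

The key property this loop shares with the strategy described above is that no retraining is involved: only inference-time interaction between the generator and an external judge.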
Similar Papers
No for Some, Yes for Others: Persona Prompts and Other Sources of False Refusal in Language Models
Computation and Language
Shows how persona prompts can make AI wrongly refuse harmless requests.
Hateful Person or Hateful Model? Investigating the Role of Personas in Hate Speech Detection by Large Language Models
Computation and Language
Examines how personas bias AI's detection of hateful speech.
Analyzing the Safety of Japanese Large Language Models in Stereotype-Triggering Prompts
Computation and Language
Tests how safely Japanese AI responds to stereotype-triggering prompts.