Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study

Published: May 9, 2025 | arXiv ID: 2505.06149v3

By: Faeze Ghorbanpour, Daryna Dementieva, Alexander Fraser

Potential Business Impact:

Detects hate speech across many languages without requiring labeled training data for each one.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Despite growing interest in automated hate speech detection, most existing approaches overlook the linguistic diversity of online content. Multilingual instruction-tuned large language models such as LLaMA, Aya, Qwen, and BloomZ offer promising capabilities across languages, but their effectiveness in identifying hate speech through zero-shot and few-shot prompting remains underexplored. This work evaluates prompting-based hate speech detection with LLMs across eight non-English languages, using several prompting techniques and comparing them to fine-tuned encoder models. We show that while zero-shot and few-shot prompting lag behind fine-tuned encoder models on most of the real-world evaluation sets, they achieve better generalization on functional tests for hate speech detection. Our study also reveals that prompt design plays a critical role, with each language often requiring customized prompting techniques to maximize performance.
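To make the zero-shot versus few-shot distinction concrete, here is a minimal sketch of how such prompting could look, assuming a BloomZ checkpoint served through the Hugging Face transformers pipeline. The model size, prompt wording, label set, and demonstration examples are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: zero-shot vs. few-shot prompting for binary hate speech
# classification. Model choice, templates, and demos are assumptions.
from transformers import pipeline

# BloomZ is one of the instruction-tuned models named in the abstract;
# the 560m checkpoint is chosen here only to keep the sketch lightweight.
generator = pipeline("text-generation", model="bigscience/bloomz-560m")

def zero_shot_prompt(text: str) -> str:
    # Zero-shot: a task description alone, with no labeled examples.
    return (
        "Classify the following text as 'hate' or 'not hate'.\n"
        f"Text: {text}\nLabel:"
    )

def few_shot_prompt(text: str, examples: list[tuple[str, str]]) -> str:
    # Few-shot: prepend a handful of (text, label) demonstrations.
    demos = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in examples)
    return (
        "Classify each text as 'hate' or 'not hate'.\n"
        f"{demos}\nText: {text}\nLabel:"
    )

def classify(prompt: str) -> str:
    # Greedy decoding of a few tokens; the pipeline returns the prompt
    # plus the completion, so strip the prompt before reading the label.
    out = generator(prompt, max_new_tokens=3, do_sample=False)
    completion = out[0]["generated_text"][len(prompt):].strip().lower()
    return "hate" if completion.startswith("hate") else "not hate"

# Hypothetical demonstrations and a non-English input for illustration.
examples = [("You people are vermin.", "hate"),
            ("I love this song.", "not hate")]
sample = "Ich hasse diese Gruppe von Menschen."  # German example input

print(classify(zero_shot_prompt(sample)))
print(classify(few_shot_prompt(sample, examples)))
```

Since the study finds that prompt design is language-sensitive, a template like the one above would in practice be customized per target language rather than reused verbatim.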

Country of Origin
🇩🇪 Germany

Page Count
12 pages

Category
Computer Science: Computation and Language