Few-shot Hate Speech Detection Based on the MindSpore Framework
By: Zhenkai Qin, Dongze Wu, Yuxin Liu, and more
Potential Business Impact:
Finds hate speech with fewer examples.
The proliferation of hate speech on social media poses a significant threat to online communities, requiring effective detection systems. While deep learning models have shown promise, their performance often deteriorates in few-shot or low-resource settings due to reliance on large annotated corpora. To address this, we propose MS-FSLHate, a prompt-enhanced neural framework for few-shot hate speech detection implemented on the MindSpore deep learning platform. The model integrates learnable prompt embeddings, a CNN-BiLSTM backbone with attention pooling, and synonym-based adversarial data augmentation to improve generalization. Experimental results on two benchmark datasets, HateXplain and HSOL, demonstrate that our approach outperforms competitive baselines in precision, recall, and F1-score. Additionally, the framework shows high efficiency and scalability, suggesting its suitability for deployment in resource-constrained environments. These findings highlight the potential of combining prompt-based learning with adversarial augmentation for robust and adaptable hate speech detection in few-shot scenarios.
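To make the described architecture concrete, here is a minimal MindSpore sketch of a prompt-enhanced CNN-BiLSTM classifier with attention pooling. It is not the authors' released code: the class name, layer sizes, and parameter names (e.g., `PromptCNNBiLSTM`, `num_prompts`) are illustrative assumptions based only on the abstract.

```python
import numpy as np
import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as ops


class PromptCNNBiLSTM(nn.Cell):
    """Sketch: learnable prompt embeddings + CNN + BiLSTM + attention pooling."""

    def __init__(self, vocab_size, embed_dim=128, num_prompts=8,
                 conv_channels=128, lstm_hidden=128, num_classes=2):
        super().__init__()
        self.lstm_hidden = lstm_hidden
        # Token embeddings plus a small bank of learnable prompt vectors
        # that are prepended to every input sequence.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.prompts = ms.Parameter(
            ms.Tensor(0.02 * np.random.randn(num_prompts, embed_dim).astype(np.float32)),
            name="prompt_embeddings")
        # Local n-gram features via 1-D convolution, sequence context via BiLSTM.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, pad_mode="same")
        self.relu = nn.ReLU()
        self.bilstm = nn.LSTM(conv_channels, lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Additive attention pooling over BiLSTM states, then a linear classifier.
        self.attn_score = nn.Dense(2 * lstm_hidden, 1)
        self.softmax = nn.Softmax(axis=1)
        self.classifier = nn.Dense(2 * lstm_hidden, num_classes)

    def construct(self, token_ids):
        x = self.embedding(token_ids)                              # (B, T, E)
        batch = x.shape[0]
        prompts = ops.broadcast_to(self.prompts.expand_dims(0),
                                   (batch,) + self.prompts.shape)  # (B, P, E)
        x = ops.concat((prompts, x), axis=1)                       # prepend prompts
        x = self.relu(self.conv(ops.transpose(x, (0, 2, 1))))      # (B, C, P+T)
        x = ops.transpose(x, (0, 2, 1))                            # (B, P+T, C)
        h0 = ops.zeros((2, batch, self.lstm_hidden), ms.float32)
        c0 = ops.zeros((2, batch, self.lstm_hidden), ms.float32)
        h, _ = self.bilstm(x, (h0, c0))                            # (B, P+T, 2H)
        weights = self.softmax(self.attn_score(h))                 # (B, P+T, 1)
        pooled = (weights * h).sum(axis=1)                         # attention pooling
        return self.classifier(pooled)                             # (B, num_classes)
```

The prompt vectors act as trainable "virtual tokens": in few-shot settings they carry task information that would otherwise have to be learned from many labeled examples.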
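The abstract's synonym-based adversarial data augmentation can be approximated by a simple synonym-replacement pass over training sentences. The sketch below assumes NLTK's WordNet is available (`nltk.download("wordnet")`); the replacement rate and function name are illustrative, and the paper's adversarial selection strategy is not reproduced here, only random replacement.

```python
import random
from nltk.corpus import wordnet


def synonym_augment(tokens, replace_prob=0.15, seed=None):
    """Return a copy of `tokens` with some words swapped for WordNet synonyms."""
    rng = random.Random(seed)
    augmented = []
    for word in tokens:
        if rng.random() < replace_prob:
            # Collect lemmas from all synsets of this word, excluding the word itself.
            candidates = {
                lemma.name().replace("_", " ")
                for syn in wordnet.synsets(word)
                for lemma in syn.lemmas()
                if lemma.name().lower() != word.lower()
            }
            if candidates:
                augmented.append(rng.choice(sorted(candidates)))
                continue
        augmented.append(word)
    return augmented


# Example: produce one perturbed variant of a training sentence.
print(synonym_augment("this movie was absolutely terrible".split(), seed=0))
```

Augmented variants keep the original label, so each few-shot example yields several training instances and the model becomes less sensitive to surface-level word substitutions.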
Similar Papers
Multimodal Zero-Shot Framework for Deepfake Hate Speech Detection in Low-Resource Languages
Sound
Finds hate speech in fake voices, even new ones.
Can Prompting LLMs Unlock Hate Speech Detection across Languages? A Zero-shot and Few-shot Study
Computation and Language
Finds hate speech in many languages.
Labels or Input? Rethinking Augmentation in Multimodal Hate Detection
Computer Vision and Pattern Recognition
Finds mean memes by looking at pictures and words.