cantnlp@DravidianLangTech2025: A Bag-of-Sounds Approach to Multimodal Hate Speech Detection
By: Sidney Wong, Andrew Li
Potential Business Impact:
Detects hateful speech in online videos and audio.
This paper presents the systems and results for the Multimodal Social Media Data Analysis in Dravidian Languages (MSMDA-DL) shared task at the Fifth Workshop on Speech, Vision, and Language Technologies for Dravidian Languages (DravidianLangTech-2025). We took a "bag-of-sounds" approach by training our hate speech detection system on the speech (audio) data using transformed Mel spectrogram measures. While our candidate model performed poorly on the test set, our approach offered promising results during training and development for Malayalam and Tamil. With sufficient and well-balanced training data, our results show that it is feasible to use both text and speech (audio) data in the development of multimodal hate speech detection systems.
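A minimal sketch of what a "bag-of-sounds" pipeline could look like: each clip's log-Mel spectrogram is collapsed into a fixed-length vector of per-band summary statistics, which then feeds a conventional classifier. The feature summary (mean and standard deviation per Mel band) and the SVM classifier are assumptions for illustration, not the authors' exact configuration, and the spectrograms below are synthetic stand-ins for real audio features.

```python
# Hypothetical "bag-of-sounds" sketch: aggregate a log-Mel spectrogram
# over time into a fixed-length feature vector, then train a classifier.
# The statistics and model choice are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def bag_of_sounds(mel_db: np.ndarray) -> np.ndarray:
    """Collapse an (n_mels, n_frames) log-Mel spectrogram into one vector
    of per-band means and standard deviations over time."""
    return np.concatenate([mel_db.mean(axis=1), mel_db.std(axis=1)])

# Synthetic stand-ins for real log-Mel spectrograms (e.g. from librosa):
rng = np.random.default_rng(0)
X = np.stack([bag_of_sounds(rng.normal(size=(64, 200))) for _ in range(40)])
y = rng.integers(0, 2, size=40)  # toy labels: 0 = non-hate, 1 = hate

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))
```

Because the time axis is summarized away, clips of different durations map to the same feature dimensionality, which is what makes this a "bag" representation.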