The Hidden Language of Harm: Examining the Role of Emojis in Harmful Online Communication and Content Moderation
By: Yuhang Zhou, Yimin Xiao, Wei Ai, and more
Potential Business Impact:
Swaps harmful emojis for harmless ones in online posts.
Social media platforms have become central to modern communication, yet they also harbor offensive content that challenges platform safety and inclusivity. While prior research has primarily focused on textual indicators of offense, the role of emojis, ubiquitous visual elements in online discourse, remains underexplored. Despite being rarely offensive in isolation, emojis can acquire harmful meanings through symbolic associations, sarcasm, and contextual misuse. In this work, we systematically examine how emojis contribute to offensive Twitter messages, analyzing their distribution across offense categories and how users exploit emoji ambiguity. To address this misuse, we propose an LLM-powered, multi-step moderation pipeline that selectively replaces harmful emojis while preserving the tweet's semantic intent. Human evaluations confirm that our approach effectively reduces perceived offensiveness without sacrificing meaning. Our analysis also reveals heterogeneous effects across offense types, offering nuanced insights for online communication and emoji moderation.
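The selective-replacement idea described in the abstract can be sketched in code. This is a minimal, hypothetical illustration, not the paper's actual pipeline: the paper uses an LLM to judge each emoji in context and to propose substitutes, whereas here both steps are stubbed with a hard-coded set of flagged emojis and a fixed neutral substitute so the control flow is runnable. The emoji lists and the replacement choice are assumptions for demonstration only.

```python
import re

# Match a broad slice of the emoji code-point ranges (simplified; real
# emoji detection is more involved than a single character class).
EMOJI_PATTERN = re.compile("[\U0001F300-\U0001FAFF\U00002600-\U000027BF]")

# Stub for the LLM's contextual judgment: an illustrative set of emojis
# sometimes co-opted for harmful meanings (assumption, not from the paper).
FLAGGED = {"\U0001F40D", "\U0001F921"}  # snake, clown face

# Stub for the LLM's substitution step: a fixed neutral replacement.
NEUTRAL_SUBSTITUTE = "\U0001F642"  # slightly smiling face

def moderate_tweet(text: str) -> str:
    """Replace only flagged emojis, leaving the rest of the tweet intact."""
    def replace(match: re.Match) -> str:
        emoji = match.group(0)
        return NEUTRAL_SUBSTITUTE if emoji in FLAGGED else emoji
    return EMOJI_PATTERN.sub(replace, text)
```

Note the design choice this mirrors: only emojis judged harmful are swapped, so benign emojis and all surrounding text pass through unchanged, which is how the pipeline preserves the tweet's semantic intent.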
Similar Papers
When Smiley Turns Hostile: Interpreting How Emojis Trigger LLMs' Toxicity
Computation and Language
Emojis trick computers into saying bad things.
On the Impact of Language Nuances on Sentiment Analysis with Large Language Models: Paraphrasing, Sarcasm, and Emojis
Computation and Language
Makes computers understand feelings in texts better.
Just a Scratch: Enhancing LLM Capabilities for Self-harm Detection through Intent Differentiation and Emoji Interpretation
Computation and Language
Helps computers spot self-harm posts online.