Social bias is prevalent in user reports of hate and abuse online
By: Florence E. Enock, Helen Z. Margetts, Jonathan Bright
Potential Business Impact:
People flag hate more when it targets their own group.
The prevalence of online hate and abuse is a pressing global concern. While tackling such societal harms is a research priority across the social sciences, the sheer scale of the problem makes it a difficult task. User engagement with reporting mechanisms (flagging) is an increasingly important part of monitoring and addressing harmful content at scale. However, users may not flag content routinely enough, and when they do engage, they may be biased by group identity and political beliefs. Across five well-powered and pre-registered online experiments, we examine the extent of social bias in the flagging of hate and abuse in four intergroup contexts: political affiliation, vaccination opinions, beliefs about climate change, and stance on abortion rights. Overall, participants reported abuse reliably, flagging approximately half of the abusive comments in each study. However, a pervasive social bias was present: ingroup-directed abuse was consistently flagged more often than outgroup-directed abuse. Our findings offer new insights into the nature of user flagging online, an understanding of which is crucial for strengthening user intervention against online hate speech and thus ensuring a safer online environment.
Similar Papers
Hate in the Time of Algorithms: Evidence on Online Behavior from a Large-Scale Experiment
General Economics
Removes harmful posts, but users find them elsewhere.
Socially-Informed Content Analysis of Online Human Behavior
Social and Information Networks
Helps make online talk less angry and more helpful.
Modelling the Spread of Toxicity and Exploring its Mitigation on Online Social Networks
Social and Information Networks
Bots reduce online hate speech by changing its message.