Towards an Automated Framework to Audit Youth Safety on TikTok
By: Linda Xue, Francesco Corso, Nicolò Fontana, and more
Potential Business Impact:
TikTok shows kids as much bad stuff as adults.
This paper investigates the effectiveness of TikTok's enforcement mechanisms for limiting youth accounts' exposure to harmful content. We collect over 7,000 videos, classify them as harmful or non-harmful, and then simulate interactions using age-specific sockpuppet accounts through both passive and active engagement strategies. We also evaluate the performance of large language models (LLMs) and vision-language models (VLMs) in detecting harmful content, identifying key challenges in precision and scalability. Preliminary results show minimal differences in content exposure between adult and youth accounts, raising concerns about the effectiveness of the platform's age-based moderation. These findings suggest that the platform needs to strengthen youth safety measures and improve transparency in content moderation.
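The paper does not publish its analysis code here, but the core comparison it describes, harmful-content exposure rates for youth versus adult sockpuppet accounts, can be illustrated with a short statistical sketch. The Python below computes per-account-type exposure rates and applies a two-proportion z-test; the `feed_logs` structure and all counts are hypothetical placeholders, not the authors' data.

```python
import math

# Hypothetical feed logs: for each sockpuppet account type, the number of
# harmful videos served and the total number of videos observed.
# These counts are illustrative placeholders, not results from the paper.
feed_logs = {
    "youth": {"harmful": 148, "total": 3500},
    "adult": {"harmful": 156, "total": 3500},
}

def exposure_rate(log: dict) -> float:
    """Fraction of served videos that were classified as harmful."""
    return log["harmful"] / log["total"]

def two_proportion_z_test(h1: int, n1: int, h2: int, n2: int):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = h1 / n1, h2 / n2
    pooled = (h1 + h2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

youth, adult = feed_logs["youth"], feed_logs["adult"]
print(f"youth exposure rate: {exposure_rate(youth):.3%}")
print(f"adult exposure rate: {exposure_rate(adult):.3%}")

z, p = two_proportion_z_test(
    youth["harmful"], youth["total"], adult["harmful"], adult["total"]
)
print(f"z = {z:.2f}, p = {p:.3f}")  # large p => no detectable difference
```

Under this kind of test, a large p-value (no detectable difference between the two rates) would be consistent with the paper's preliminary finding of minimal exposure differences between youth and adult accounts.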
Similar Papers
Protecting Young Users on Social Media: Evaluating the Effectiveness of Content Moderation and Legal Safeguards on Video Sharing Platforms
Social and Information Networks
Younger kids see bad videos faster online.
MTikGuard System: A Transformer-Based Multimodal System for Child-Safe Content Moderation on TikTok
Computation and Language
Finds bad videos on TikTok fast.
Analyzing Social Media Claims regarding Youth Online Safety Features to Identify Problem Areas and Communication Gaps
Human-Computer Interaction
Social media hides how safe kids really are.