Score: 3

Towards an Automated Framework to Audit Youth Safety on TikTok

Published: September 6, 2025 | arXiv ID: 2509.05838v1

By: Linda Xue, Francesco Corso, Nicolò Fontana and more

BigTech Affiliations: Massachusetts Institute of Technology

Potential Business Impact:

TikTok shows youth accounts roughly as much harmful content as adult accounts.

Business Areas:
Teenagers Community and Lifestyle

This paper investigates the effectiveness of TikTok's enforcement mechanisms for limiting the exposure of harmful content to youth accounts. The authors collect over 7,000 videos, classify them as harmful or not harmful, and then simulate interactions using age-specific sockpuppet accounts through both passive and active engagement strategies. They also evaluate the performance of large language models (LLMs) and vision-language models (VLMs) in detecting harmful content, identifying key challenges in precision and scalability. Preliminary results show minimal differences in content exposure between adult and youth accounts, raising concerns about the platform's age-based moderation. These findings suggest that the platform needs to strengthen youth safety measures and improve transparency in content moderation.
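The core measurement behind such an audit is an exposure comparison: for each account type, what fraction of the recommended feed is harmful, and how large is the gap between age groups? Below is a minimal sketch of that comparison on synthetic data; the harmful-content rates, seed, and function names are illustrative assumptions, not figures or code from the paper.

```python
import random

def simulate_feed(rng, harmful_rate, n_videos):
    """Draw a synthetic feed: each video is marked harmful (True)
    with probability harmful_rate. Stands in for the labels a real
    audit would get from human annotation or an LLM/VLM classifier."""
    return [rng.random() < harmful_rate for _ in range(n_videos)]

def exposure_rate(feed):
    """Fraction of the feed flagged as harmful."""
    return sum(feed) / len(feed)

rng = random.Random(42)

# Hypothetical rates: identical for both account types, mirroring the
# paper's preliminary finding of minimal difference in exposure.
adult_feed = simulate_feed(rng, harmful_rate=0.10, n_videos=7000)
youth_feed = simulate_feed(rng, harmful_rate=0.10, n_videos=7000)

gap = abs(exposure_rate(adult_feed) - exposure_rate(youth_feed))
print(f"adult={exposure_rate(adult_feed):.3f} "
      f"youth={exposure_rate(youth_feed):.3f} gap={gap:.3f}")
```

In a real audit the feeds would come from sockpuppet accounts (passive scrolling or active engagement) rather than a random draw, and a small gap like the one measured here would be the signal that age-based moderation is not differentiating exposure.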

Country of Origin
🇮🇹 🇺🇸 Italy, United States

Repos / Data Links

Page Count
7 pages

Category
Computer Science:
Computers and Society