Mapping Toxic Comments Across Demographics: A Dataset from German Public Broadcasting
By: Jan Fillies, Michael Peter Hoffmann, Rebecca Reichel, and more
Potential Business Impact:
Helps online spaces understand age differences in bad talk.
A lack of demographic context in existing toxic speech datasets limits our understanding of how different age groups communicate online. In collaboration with funk, a German public service content network, this research introduces the first large-scale German dataset annotated for toxicity and enriched with platform-provided age estimates. The dataset includes 3,024 human-annotated and 30,024 LLM-annotated anonymized comments from Instagram, TikTok, and YouTube. To ensure relevance, comments were pre-selected using predefined toxic keywords, and 16.7% were ultimately labeled as problematic. The annotation pipeline combined human expertise with state-of-the-art language models, identifying key categories such as insults, disinformation, and criticism of broadcasting fees. The dataset reveals age-based differences in toxic speech patterns: younger users favor expressive language, while older users more often engage in disinformation and devaluation. This resource opens new opportunities for studying linguistic variation across demographics and supports the development of more equitable and age-aware content moderation systems.
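The abstract describes a keyword-based pre-selection step before annotation. The Python sketch below illustrates what such a step could look like; the keyword list, the function name `select_candidate_comments`, and the whole-word matching rule are illustrative assumptions, not the paper's actual keyword set or pipeline.

```python
import re

# Hypothetical keyword list for illustration; the paper's actual keyword set is not shown here.
TOXIC_KEYWORDS = ["beleidigung", "lüge", "zwangsgebühren"]

# One case-insensitive pattern that matches any keyword as a whole word.
KEYWORD_PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, TOXIC_KEYWORDS)) + r")\b",
    flags=re.IGNORECASE,
)


def select_candidate_comments(comments):
    """Return only the comments containing at least one predefined toxic keyword."""
    return [c for c in comments if KEYWORD_PATTERN.search(c)]


if __name__ == "__main__":
    sample = [
        "Das ist eine Lüge und reine Propaganda.",
        "Tolles Video, danke!",
        "Wieder Zwangsgebühren für sowas?",
    ]
    # Only the first and third comments match a keyword and would move on to annotation.
    for comment in select_candidate_comments(sample):
        print(comment)
```

In a setup like this, only the keyword-matching comments would be forwarded to the human and LLM annotation stages, which keeps annotation effort focused on likely problematic content.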
Similar Papers
Defining, Understanding, and Detecting Online Toxicity: Challenges and Machine Learning Approaches
Computation and Language
Finds and stops bad online words.
ToxicTAGS: Decoding Toxic Memes with Rich Tag Annotations
CV and Pattern Recognition
Helps stop mean memes online.
A Multi-Task Benchmark for Abusive Language Detection in Low-Resource Settings
Computation and Language
Helps Tigrinya speakers fight online hate speech.