Evaluating the Sensitivity of LLMs to Harmful Contents in Long Input
By: Faeze Ghorbanpour, Alexander Fraser
Potential Business Impact:
AI models miss bad words in long texts.
Large language models (LLMs) increasingly support applications that rely on extended context, from document processing to retrieval-augmented generation. While their long-context capabilities are well studied for reasoning and retrieval, little is known about their behavior in safety-critical scenarios. We evaluate LLMs' sensitivity to harmful content under extended context, varying type (explicit vs. implicit), position (beginning, middle, end), prevalence (0.01-0.50 of the prompt), and context length (600-6000 tokens). Across harmful content categories such as toxic, offensive, and hate speech, with LLaMA-3, Qwen-2.5, and Mistral, we observe similar patterns: performance peaks at moderate harmful prevalence (0.25) but declines when content is very sparse or dominant; recall decreases with increasing context length; harmful sentences at the beginning are generally detected more reliably; and explicit content is more consistently recognized than implicit. These findings provide the first systematic view of how LLMs prioritize and calibrate harmful content in long contexts, highlighting both their emerging strengths and the challenges that remain for safety-critical use.
Similar Papers
LLM-based Semantic Augmentation for Harmful Content Detection
Computation and Language
Cleans internet text to fight bad posts.
Guardians and Offenders: A Survey on Harmful Content Generation and Safety Mitigation
Computation and Language
Makes AI safer and less likely to say bad things.
Guardians and Offenders: A Survey on Harmful Content Generation and Safety Mitigation of LLM
Computation and Language
Makes AI safer and less likely to say bad things.