Content Moderation in TV Search: Balancing Policy Compliance, Relevance, and User Experience
By: Adeep Hande, Kishorekumar Sundararajan, Sardar Hamidian, and more
Potential Business Impact:
Keeps search results from surfacing inappropriate or irrelevant content.
Millions of people rely on search functionality to find and explore content on entertainment platforms. Modern search systems combine candidate generation and ranking approaches, with advanced methods leveraging deep learning and LLM-based techniques to retrieve, generate, and categorize search results. Despite these advancements, search algorithms can still surface inappropriate or irrelevant content due to factors such as model unpredictability, metadata errors, or overlooked design flaws. Such issues can misalign with product goals and user expectations, potentially harming user trust and business outcomes. In this work, we introduce an additional monitoring layer that uses Large Language Models (LLMs) to enhance content moderation. This layer flags content that the user likely did not intend to search for. The approach serves as a baseline for product quality assurance, and the collected feedback is used to refine the initial retrieval mechanisms of the search model, ensuring a safer and more reliable user experience.
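The abstract does not give implementation details, but the monitoring layer it describes can be pictured as a post-retrieval filter: an LLM judges each retrieved title against the user's query and flags results that do not match the search intent, and the flags are logged as feedback for the retrieval model. The snippet below is a minimal sketch under those assumptions; the `llm_judge` callable, the prompt wording, and the `ModerationFlag` record are illustrative and are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical LLM judge: takes a prompt string and returns the model's text reply.
# In practice this would wrap whatever chat/completions client the platform uses.
LLMJudge = Callable[[str], str]

@dataclass
class ModerationFlag:
    query: str
    title_id: str
    reason: str

PROMPT_TEMPLATE = (
    'A user on a TV entertainment platform searched for: "{query}".\n'
    'The search system returned the title: "{title}" ({metadata}).\n'
    "Answer INTENDED if a typical user issuing this query would expect this title, "
    "or UNINTENDED followed by a short reason if the title is inappropriate or "
    "irrelevant for this query."
)

def moderate_results(query: str,
                     results: List[dict],
                     llm_judge: LLMJudge) -> List[ModerationFlag]:
    """Post-retrieval monitoring layer: flag results the user likely did not intend to find."""
    flags: List[ModerationFlag] = []
    for result in results:
        prompt = PROMPT_TEMPLATE.format(
            query=query,
            title=result["title"],
            metadata=result.get("metadata", "no metadata"),
        )
        verdict = llm_judge(prompt).strip()
        if verdict.upper().startswith("UNINTENDED"):
            flags.append(ModerationFlag(
                query=query,
                title_id=result["id"],
                reason=verdict.partition(" ")[2] or "unspecified",
            ))
    return flags

# Flagged items could be demoted or suppressed before display, and the logged
# flags fed back as training signal to refine the retrieval and ranking models.
```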
Similar Papers
Re-ranking Using Large Language Models for Mitigating Exposure to Harmful Content on Social Media Platforms
Computation and Language
Re-ranks feeds so harmful content stops showing up online.
Towards Safer Social Media Platforms: Scalable and Performant Few-Shot Harmful Content Moderation Using Large Language Models
Computation and Language
Shows LLMs can spot harmful online posts better than human moderators.
Longitudinal Monitoring of LLM Content Moderation of Social Issues
Computation and Language
Tracks LLM moderation decisions over time to show how they shape what we see.