DSA, AIA, and LLMs: Approaches to conceptualizing and auditing moderation in LLM-based chatbots across languages and interfaces in electoral contexts
By: Natalia Stanusch, Raziye Buse Cetin, Salvatore Romano, and more
Potential Business Impact:
Tests AI chatbots for fair election answers.
The integration of Large Language Models (LLMs) into chatbot-like search engines poses new challenges for governing, assessing, and scrutinizing the content output by these online entities, especially in light of the Digital Services Act (DSA). In what follows, we first survey the regulatory landscape in which we can situate LLM-based chatbots and the notion of moderation. Second, we outline the methodological approach of our study: a mixed-methods audit across chatbots, languages, and elections. We investigated Copilot, ChatGPT, and Gemini across ten languages in the context of the 2024 European Parliamentary Election and the 2024 US Presidential Election. Despite the uncertainty in regulatory frameworks, we propose a set of solutions for situating, studying, and evaluating chatbot moderation.
Similar Papers
Artificial Intelligence and Civil Discourse: How LLMs Moderate Climate Change Conversations
Computers and Society
AI calms down online arguments about climate change.
Longitudinal Monitoring of LLM Content Moderation of Social Issues
Computation and Language
Tracks AI's choices to show how it shapes what we see.
Auditing LLM Editorial Bias in News Media Exposure
Computers and Society
AI news tools show different opinions than Google.