Are Large Language Models Good at Detecting Propaganda?
By: Julia Jose, Rachel Greenstadt
Potential Business Impact:
Helps computers spot propaganda tricks in news articles.
Propagandists use rhetorical devices that rely on logical fallacies and emotional appeals to advance their agendas. Recognizing these techniques is key to making informed decisions. Recent advances in Natural Language Processing (NLP) have enabled the development of systems capable of detecting manipulative content. In this study, we evaluate several Large Language Models (LLMs) on their ability to detect propaganda techniques in news articles, and we compare their performance with that of transformer-based models. We find that, while GPT-4 achieves a higher F1 score (F1=0.16) than GPT-3.5 and Claude 3 Opus, it does not outperform a RoBERTa-CRF baseline (F1=0.67). Additionally, all three LLMs outperform a Multi-Granularity Network (MGN) baseline in detecting instances of one out of six propaganda techniques (name-calling), with GPT-3.5 and GPT-4 also outperforming the MGN baseline in detecting instances of appeal to fear and flag-waving.
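The abstract compares models by F1 score on per-technique detection. As a minimal sketch of how such a comparison might be scored, the snippet below computes precision, recall, and F1 over exact-match (technique, span) predictions. The example annotations and the exact-match criterion are illustrative assumptions, not the paper's evaluation protocol; shared tasks in this area (e.g., SemEval-2020 Task 11) typically award partial credit for overlapping spans.

```python
# Minimal sketch: span-level F1 for propaganda-technique detection.
# Assumption (not from the paper): exact-match scoring over
# (technique, char_start, char_end) triples.

def f1_per_technique(gold, pred):
    """Compute per-technique precision/recall/F1 from sets of
    (technique, start, end) annotations."""
    techniques = {t for t, _, _ in gold | pred}
    scores = {}
    for tech in techniques:
        g = {s for s in gold if s[0] == tech}
        p = {s for s in pred if s[0] == tech}
        tp = len(g & p)  # exact-match true positives
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[tech] = (prec, rec, f1)
    return scores

# Illustrative annotations only: (technique, char_start, char_end).
gold = {("name-calling", 10, 24), ("flag-waving", 40, 58), ("appeal to fear", 70, 90)}
pred = {("name-calling", 10, 24), ("flag-waving", 41, 58)}  # one exact hit, one near miss

for tech, (p, r, f) in sorted(f1_per_technique(gold, pred).items()):
    print(f"{tech:>15}: P={p:.2f} R={r:.2f} F1={f:.2f}")
```

Under exact-match scoring the near-miss span earns no credit, which illustrates why overlap-aware metrics are common for this task and why span-level F1 scores can look low in absolute terms.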
Similar Papers
An Empirical Analysis of LLMs for Countering Misinformation
Computation and Language
Helps computers spot fake news, but needs improvement.
Probing the Subtle Ideological Manipulation of Large Language Models
Computation and Language
Tests whether computers can be subtly pushed toward political views.
Bridging Human and Model Perspectives: A Comparative Analysis of Political Bias Detection in News Media Using Large Language Models
Computation and Language
Helps computers spot political bias in news like people do.