Large Language Model-Informed Feature Discovery Improves Prediction and Interpretation of Credibility Perceptions of Visual Content
By: Yilang Peng, Sijia Qian, Yingdan Lu, and more
Potential Business Impact:
Helps spot fake online pictures and news.
In today's visually dominated social media landscape, predicting the perceived credibility of visual content and understanding what drives human judgment are crucial for countering misinformation. However, these tasks are challenging due to the diversity and richness of visual features. We introduce a Large Language Model (LLM)-informed feature discovery framework that leverages multimodal LLMs, such as GPT-4o, to evaluate content credibility and explain its reasoning. We extract and quantify interpretable features using targeted prompts and integrate them into machine learning models to improve credibility predictions. We tested this approach on 4,191 visual social media posts across eight topics in science, health, and politics, using credibility ratings from 5,355 crowdsourced workers. Our method outperformed zero-shot GPT-based predictions by 13 percent in R², and revealed key features like information concreteness and image format. We discuss the implications for misinformation mitigation, visual credibility, and the role of LLMs in social science.
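The pipeline the abstract describes, prompting a multimodal LLM for interpretable feature scores and feeding them into a conventional predictive model, lends itself to a compact sketch. The following Python is illustrative only, assuming the OpenAI client and scikit-learn; the feature list (loosely based on features named in the abstract, such as information concreteness and image format), the prompt wording, and the helper names are hypothetical, not the authors' exact protocol.

```python
# Minimal sketch of an LLM-informed feature discovery pipeline (illustrative).
# Assumes OPENAI_API_KEY is set and `posts` is a list of dicts, each with an
# image URL and a crowdsourced credibility rating. Feature names and the
# prompt are assumptions for the sketch, not the paper's exact instrument.
import json

import numpy as np
from openai import OpenAI
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

client = OpenAI()

FEATURES = ["information_concreteness", "image_format_is_screenshot",
            "source_cues_present", "emotional_appeal"]

def score_features(image_url: str) -> list[float]:
    """Ask a multimodal LLM to rate interpretable features of a post (0-1)."""
    prompt = (
        "Rate this social media image on each feature from 0 to 1 and "
        f"reply as JSON with keys: {', '.join(FEATURES)}."
    )
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    scores = json.loads(resp.choices[0].message.content)
    return [float(scores[f]) for f in FEATURES]

def fit_credibility_model(posts: list[dict]) -> None:
    """Quantified LLM features feed a simple, interpretable regression."""
    X = np.array([score_features(p["image_url"]) for p in posts])
    y = np.array([p["credibility_rating"] for p in posts])
    model = Ridge(alpha=1.0)
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"Cross-validated R²: {r2:.3f}")
    model.fit(X, y)
    for name, coef in zip(FEATURES, model.coef_):
        print(f"{name}: {coef:+.3f}")  # signed weights aid interpretation
```

The design choice this sketch highlights is the paper's core idea: rather than asking the LLM for a credibility score directly (zero-shot prediction), the LLM quantifies named features whose signed regression weights can then be inspected, which is where the interpretability gain comes from.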
Similar Papers
A Hybrid Theory and Data-driven Approach to Persuasion Detection with Large Language Models
Computation and Language
Helps computers tell if online messages change minds.
ThumbnailTruth: A Multi-Modal LLM Approach for Detecting Misleading YouTube Thumbnails Across Diverse Cultural Settings
Social and Information Networks
Finds fake video pictures that trick you.
Large Language Models and Provenance Metadata for Determining the Relevance of Images and Videos in News Stories
Computation and Language
Finds fake news by checking text and pictures.