Bridging Human and Model Perspectives: A Comparative Analysis of Political Bias Detection in News Media Using Large Language Models
By: Shreya Adrita Banik, Niaz Nafi Rahman, Tahsina Moiukh, and more
Potential Business Impact:
Helps computers spot news bias the way people do.
Detecting political bias in news media is a complex task that requires interpreting subtle linguistic and contextual cues. Although recent advances in Natural Language Processing (NLP) have enabled automatic bias classification, the extent to which large language models (LLMs) align with human judgment remains underexplored. This study presents a comparative framework for evaluating political bias detection across human annotations and multiple LLMs, including GPT, BERT, RoBERTa, and FLAN. We construct a manually annotated dataset of news articles and assess annotation consistency, bias polarity, and inter-model agreement to quantify divergence between human and model perceptions of bias. Experimental results show that among traditional transformer-based baselines, our fine-tuned RoBERTa model achieves the highest accuracy and the strongest alignment with human-annotated labels, whereas generative models such as GPT demonstrate the strongest overall agreement with human annotations in a zero-shot setting. Our findings highlight systematic differences in how humans and LLMs perceive political slant, underscoring the need for hybrid evaluation frameworks that combine human interpretability with model scalability in automated media bias detection.
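The agreement measures the abstract describes (human-vs-model and inter-model alignment on bias labels) are typically chance-corrected statistics such as Cohen's kappa. The paper does not specify its exact metric, so the sketch below is a minimal illustration of Cohen's kappa on hypothetical bias labels, not the authors' implementation:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal
    # label distribution, assuming independent labeling.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical bias labels for six articles (not from the paper's data).
human = ["left", "center", "right", "left", "center", "right"]
model = ["left", "center", "right", "center", "center", "right"]
print(round(cohens_kappa(human, model), 3))  # prints 0.75
```

A kappa near 1 indicates alignment well beyond chance; values near 0 mean the model's labels agree with humans no more often than random guessing with the same label frequencies would.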
Similar Papers
Navigating Nuance: In Quest for Political Truth
Computation and Language
Helps computers spot fake news and bias.
Integrating Large Language Models and Knowledge Graphs to Capture Political Viewpoints in News Media
Computation and Language
Helps see if news stories show many opinions.
Large Means Left: Political Bias in Large Language Models Increases with Their Number of Parameters
Computation and Language
AI models show political bias, leaning left.