Frame In, Frame Out: Do LLMs Generate More Biased News Headlines than Humans?
By: Valeria Pastorino, Nafise Sadat Moosavi
Potential Business Impact:
LLM-generated news can be more slanted than human-written news.
Framing in media critically shapes public perception by selectively emphasizing some details while downplaying others. With the rise of large language models (LLMs) in automated news and content creation, there is growing concern that these systems may introduce or even amplify framing biases compared to human authors. In this paper, we explore how framing manifests in both out-of-the-box and fine-tuned LLM-generated news content. Our analysis reveals that, particularly in politically and socially sensitive contexts, LLMs tend to exhibit more pronounced framing than their human counterparts. In addition, we observe significant variation in framing tendencies across different model architectures, with some models displaying notably stronger biases. These findings point to the need for effective post-training mitigation strategies and tighter evaluation frameworks to ensure that automated news content upholds the standards of balanced reporting.
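To make the comparison concrete, here is a minimal sketch of how one might measure framing rates in human- versus LLM-written headlines using an off-the-shelf text classifier. The model name "frame-detector" and the "FRAMED" label are placeholders for illustration, not artifacts released with this paper; the authors' actual pipeline may differ.

```python
from transformers import pipeline

# Hypothetical frame classifier; "frame-detector" is a placeholder model
# name, not a model released by the paper's authors.
detect_frame = pipeline("text-classification", model="frame-detector")

def framing_rate(headlines):
    """Fraction of headlines the classifier labels as framed.

    Assumes the classifier emits a "FRAMED" label (an illustrative
    convention, not one defined by the paper).
    """
    results = detect_frame(headlines)
    return sum(r["label"] == "FRAMED" for r in results) / len(headlines)

# Headlines for the same underlying stories, one set per author type.
human_headlines = ["..."]  # written by journalists
llm_headlines = ["..."]    # generated by an LLM

print(f"human framing rate: {framing_rate(human_headlines):.2%}")
print(f"LLM framing rate:   {framing_rate(llm_headlines):.2%}")
```

Comparing the two rates across politically sensitive and neutral topics is one simple way to surface the kind of gap the paper reports.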
Similar Papers
WildFrame: Comparing Framing in Humans and LLMs on Naturally Occurring Texts
Computation and Language
AI mimics human bias when information is framed.
Left, Right, or Center? Evaluating LLM Framing in News Classification and Generation
Computation and Language
AI writing often leans toward the middle.
Computational frame analysis revisited: On LLMs for studying news coverage
Computation and Language
LLMs can analyze framing in news coverage better than earlier methods.