Left, Right, or Center? Evaluating LLM Framing in News Classification and Generation
By: Molly Kennedy, Ali Parker, Yihong Liu, and more
Potential Business Impact:
AI writing often leans toward the middle.
Large Language Model (LLM)-based summarization and text generation are increasingly used to produce and rewrite text, raising concerns about political framing in journalism, where subtle wording choices can shape interpretation. Across nine state-of-the-art LLMs, we study political framing by testing whether LLMs' classification-based bias signals align with the framing behavior of their generated summaries. We first compare few-shot ideology predictions against LEFT/CENTER/RIGHT labels. We then generate "steered" summaries under FAITHFUL, CENTRIST, LEFT, and RIGHT prompts, and score all outputs using a single fixed ideology evaluator. We find pervasive ideological center-collapse in both article-level ratings and generated text, indicating a systematic tendency toward centrist framing. Among evaluated models, Grok 4 is by far the most ideologically expressive generator, while Claude Sonnet 4.5 and Llama 3.1 achieve the strongest bias-rating performance among commercial and open-weight models, respectively.
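The steered-summary protocol described above can be sketched as a simple loop: one summary per steering condition, each scored by a single fixed evaluator. This is a minimal illustration, not the paper's code; `call_model` and `score_ideology` are hypothetical stand-ins for the actual LLM API calls and the fixed ideology evaluator.

```python
# Hedged sketch of the steered-summary evaluation loop.
# The steering conditions come from the abstract; everything else
# (function names, score scale) is an assumption for illustration.

STEERS = ["FAITHFUL", "CENTRIST", "LEFT", "RIGHT"]

def call_model(prompt: str) -> str:
    """Placeholder for a real LLM API call (hypothetical)."""
    return f"summary for: {prompt[:40]}"

def score_ideology(text: str) -> float:
    """Placeholder fixed evaluator, mapping text to a score in [-1, 1]
    (LEFT = -1, CENTER = 0, RIGHT = +1). A real evaluator would be a
    trained classifier; this stub always returns 0.0, mimicking the
    center-collapse the paper reports."""
    return 0.0

def steered_summaries(article: str) -> dict:
    """Generate one summary per steering condition and score each
    with the single fixed evaluator."""
    results = {}
    for steer in STEERS:
        prompt = f"[{steer}] Summarize the following article:\n{article}"
        summary = call_model(prompt)
        results[steer] = {"summary": summary,
                          "score": score_ideology(summary)}
    return results

if __name__ == "__main__":
    for steer, r in steered_summaries("Example article text ...").items():
        print(steer, r["score"])
```

Comparing the per-condition scores against the FAITHFUL baseline is what reveals whether a model actually shifts its framing when steered, or collapses toward the center regardless of the prompt.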
Similar Papers
Frame In, Frame Out: Do LLMs Generate More Biased News Headlines than Humans?
Computation and Language
Computers can twist news more than people.
Computational frame analysis revisited: On LLMs for studying news coverage
Computation and Language
Helps computers understand news stories better than before.
Auditing LLM Editorial Bias in News Media Exposure
Computers and Society
AI news tools show different opinions than Google.