Large-Scale, Longitudinal Study of Large Language Models During the 2024 US Election Season
By: Sarah H. Cen, Andrew Ilyas, Hedi Driss, and more
Potential Business Impact:
Tests AI's election answers and biases.
The 2024 US presidential election is the first major contest to occur in the US since the popularization of large language models (LLMs). Building on lessons from earlier shifts in media (most notably social media's well-studied role in targeted messaging and political polarization), this moment raises urgent questions about how LLMs may shape the information ecosystem and influence political discourse. While platforms have announced some election safeguards, how well these work in practice remains unclear. Against this backdrop, we conduct a large-scale, longitudinal study of 12 models, queried using a structured survey with over 12,000 questions on a near-daily cadence from July through November 2024. Our design systematically varies content and format, resulting in a rich dataset that enables analyses of the models' behavior over time (e.g., across model updates), sensitivity to steering, responsiveness to instructions, and election-related knowledge and "beliefs." In the latter half of our work, we perform four analyses of the dataset that (i) study the longitudinal variation of model behavior during election season, (ii) illustrate the sensitivity of election-related responses to demographic steering, (iii) interrogate the models' beliefs about candidates' attributes, and (iv) reveal the models' implicit predictions of the election outcome. To facilitate future evaluations of LLMs in electoral contexts, we detail our methodology, from question generation to the querying pipeline and third-party tooling. We also publicly release our dataset at https://huggingface.co/datasets/sarahcen/llm-election-data-2024
Similar Papers
A Framework to Assess the Persuasion Risks Large Language Model Chatbots Pose to Democratic Societies
Computation and Language
Computers can now convince voters cheaper than ads.
Benchmarking Gender and Political Bias in Large Language Models
Computation and Language
Finds AI bias in political speech and voting.