Simulating hashtag dynamics with networked groups of generative agents
By: Abha Jha, J. Hunter Priniski, Carolyn Steinle, and more
Potential Business Impact:
Helps AI understand how stories change what people believe.
Networked environments shape how information embedded in narratives influences individual and group beliefs and behavior. This raises key questions about how group communication around narrative media shapes belief formation, and how those mechanisms drive the emergence of consensus or polarization. Language data from generative agents offer insight into how naturalistic forms of narrative interaction (such as hashtag generation) evolve in response to social rewards within networked communication settings. To investigate this, we developed an agent-based modeling and simulation framework composed of networks of interacting Large Language Model (LLM) agents. We benchmarked simulations from four state-of-the-art LLMs against human group behavior observed in a prior network experiment (Study 1) and against naturally occurring hashtags from Twitter (Study 2). Quantitative metrics of network coherence (e.g., the entropy of a group's responses) show that while LLMs can approximate human-like coherence in sanitized domains (Study 1's experimental data), effectively integrating background knowledge and social context in more complex or politically sensitive narratives likely requires careful, structured prompting.
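To make the coherence metric concrete, here is a minimal sketch of a networked hashtag round scored by the entropy of the group's responses. Everything named here is hypothetical: `propose_hashtag` is a simple rule-based stand-in for an LLM agent, and the ring network and copy probability are illustrative assumptions, not details from the paper, which only describes the metric and setup generically.

```python
import math
import random
from collections import Counter

def shannon_entropy(responses):
    """Shannon entropy (bits) of a group's hashtag distribution.
    Lower entropy = the group has converged on fewer tags (higher coherence)."""
    counts = Counter(responses)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def propose_hashtag(agent_id, neighbor_tags, rng):
    """Hypothetical stand-in for an LLM call: copy a neighbor's tag with some
    probability, otherwise keep one's own. A real framework would instead
    prompt an LLM with the neighbor context and a social-reward signal."""
    if neighbor_tags and rng.random() < 0.7:
        return rng.choice(neighbor_tags)
    return f"#tag{agent_id}"

def simulate(network, rounds=10, seed=0):
    """Run a simple networked hashtag game, yielding group entropy per round."""
    rng = random.Random(seed)
    tags = {a: f"#tag{a}" for a in network}  # each agent starts with its own tag
    for _ in range(rounds):
        new_tags = {}
        for agent, neighbors in network.items():
            neighbor_tags = [tags[n] for n in neighbors]
            new_tags[agent] = propose_hashtag(agent, neighbor_tags, rng)
        tags = new_tags
        yield shannon_entropy(list(tags.values()))

# Hypothetical 4-agent ring network; entropy falling toward 0 indicates consensus.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for r, h in enumerate(simulate(ring), start=1):
    print(f"round {r}: entropy = {h:.3f} bits")
```

In a simulation like the one the abstract describes, the LLM would replace the rule-based proposal step, and the entropy trajectory would be compared against the human group data (Study 1) or observed Twitter hashtags (Study 2).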
Similar Papers
Simulating Online Social Media Conversations on Controversial Topics Using AI Agents Calibrated on Real-World Data
Social and Information Networks
Computers can now pretend to be people online.
Simulating and Experimenting with Social Media Mobilization Using LLM Agents
Social and Information Networks
Shows how online friends change voting.
Simulating Misinformation Vulnerabilities With Agent Personas
Social and Information Networks
Lets computers learn how people believe fake news.