How Malicious AI Swarms Can Threaten Democracy
By: Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, and others
Potential Business Impact:
Coordinated AI swarms can spread disinformation and manipulate public opinion at scale.
Advances in AI portend a new era of sophisticated disinformation operations. While individual AI systems already produce convincing, and at times misleading, information, an imminent development is the emergence of malicious AI swarms. These systems can coordinate covertly, infiltrate communities, evade traditional detectors, and run continuous A/B tests with round-the-clock persistence. The consequences can include fabricated grassroots consensus, fragmented shared reality, mass harassment, voter micro-suppression or mobilization, contamination of AI training data, and erosion of institutional trust. With democratic processes worldwide increasingly vulnerable, we urge a three-pronged response: (1) platform-side defenses, including always-on swarm-detection dashboards, pre-election high-fidelity swarm-simulation stress tests, transparency audits, and optional client-side "AI shields" for users; (2) model-side safeguards, including standardized persuasion-risk tests, provenance-authenticating passkeys, and watermarking; and (3) system-level oversight via a UN-backed AI Influence Observatory.
Similar Papers
Designing AI-Enabled Countermeasures to Cognitive Warfare
Computers and Society
Proposes AI-enabled countermeasures to disinformation and cognitive manipulation online.
Artificial intelligence and democracy: Towards digital authoritarianism or a democratic upgrade?
Computers and Society
Examines whether AI will push democracies toward digital authoritarianism or enable a democratic upgrade.
Explainable AI Based Diagnosis of Poisoning Attacks in Evolutionary Swarms
Artificial Intelligence
Uses explainable AI to diagnose data-poisoning attacks that degrade evolutionary swarm behavior.