A Mathematical Theory of Discursive Networks
By: Juan B. Gutiérrez
Potential Business Impact:
Connects AI and people to find and fix mistakes.
Large language models (LLMs) turn writing into a live exchange between humans and software. We characterize this new medium as a discursive network that treats people and LLMs as equal nodes and tracks how their statements circulate. We define the generation of erroneous information as invalidation (any factual, logical, or structural breach) and show it follows four hazards: drift from truth, self-repair, fresh fabrication, and external detection. We develop a general mathematical model of discursive networks showing that a network governed only by drift and self-repair stabilizes at a modest but persistent error rate. Giving each false claim even a small chance of peer review shifts the system to a truth-dominant state. We operationalize peer review with the open-source Flaws-of-Others (FOO) algorithm: a configurable loop in which any set of agents critique one another while a harmonizer merges their verdicts. We identify an ethical transgression, epithesis, that occurs when humans fail to engage in the discursive network. The takeaway is practical and cultural: reliability in this new medium comes not from perfecting single models but from connecting imperfect ones into networks that enforce mutual accountability.
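The abstract's central claim, that drift and self-repair alone leave a persistent error rate while even a little peer review shifts the balance toward truth, can be illustrated with a minimal two-state sketch. This is not the paper's actual model; the hazard rates below are illustrative assumptions, and the equilibrium formula follows from balancing flows between a "true" and a "false" state.

```python
# Minimal sketch (not the paper's exact model): each statement is either
# true or false, and transitions between the two states at fixed rates.
# All numeric rates here are illustrative assumptions.
drift = 0.10        # chance per round a true statement degrades into a false one
self_repair = 0.30  # chance per round a false statement is corrected by its author
peer_review = 0.25  # chance per round a false statement is caught by another agent

def steady_state_error(drift: float, repair: float, review: float) -> float:
    """Long-run fraction of false statements in the network.

    At equilibrium, flow into the false state equals flow out:
        drift * (1 - e) = (repair + review) * e
    which solves to e = drift / (drift + repair + review).
    """
    return drift / (drift + repair + review)

no_review = steady_state_error(drift, self_repair, 0.0)
with_review = steady_state_error(drift, self_repair, peer_review)
print(f"error rate without peer review: {no_review:.3f}")   # 0.250
print(f"error rate with peer review:    {with_review:.3f}")  # 0.154
```

Because the review rate enters the denominator, any nonzero chance of external detection strictly lowers the equilibrium error fraction, which is the qualitative behavior the abstract attributes to the FOO loop.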
Similar Papers
Simulating Misinformation Propagation in Social Networks using Large Language Models
Social and Information Networks
Finds how fake news spreads and how to stop it.
Disrupting Networks: Amplifying Social Dissensus via Opinion Perturbation and Large Language Models
Social and Information Networks
Shows how nudging opinions with AI can deepen disagreement online.
Information Diffusion and Preferential Attachment in a Network of Large Language Models
Social and Information Networks
Tracks how information spreads through networks of AI models.