Generative AI collective behavior needs an interactionist paradigm
By: Laura Ferrarotti, Gian Maria Campedelli, Roberto Dessì, and others
In this article, we argue that understanding the collective behavior of agents based on large language models (LLMs) is an essential area of inquiry, with important implications for both risks and benefits across many levels of society. We claim that the distinctive nature of LLMs -- namely, their initialization with extensive pre-trained knowledge and implicit social priors, together with their capacity for adaptation through in-context learning -- motivates an interactionist paradigm grounded in alternative theoretical foundations, methodologies, and analytical tools. Such a paradigm would make it possible to systematically examine how prior knowledge and embedded values interact with social context to shape emergent phenomena in multi-agent generative AI systems. We propose and discuss four directions that we consider crucial for the development and deployment of LLM-based collectives, focusing on theory, methods, and trans-disciplinary dialogue.