In Dialogue with Intelligence: Rethinking Large Language Models as Collective Knowledge
By: Eleni Vasilaki
Potential Business Impact:
AI learns by talking, not just storing facts.
Large Language Models (LLMs) are typically analysed through architectural, behavioural, or training-data lenses. This article offers a theoretical and experiential reframing: LLMs as dynamic instantiations of Collective human Knowledge (CK), in which intelligence is evoked through dialogue rather than stored statically. Drawing on concepts from neuroscience and AI, and grounded in sustained interaction with ChatGPT-4, I examine emergent dialogue patterns, the implications of fine-tuning, and the notion of co-augmentation: the mutual enhancement of human and machine cognition. This perspective offers a new lens for understanding interaction, representation, and agency in contemporary AI systems.
Similar Papers
Generative AI collective behavior needs an interactionist paradigm
Artificial Intelligence
Helps AI systems learn to work together safely.
Learning Through Dialogue: Unpacking the Dynamics of Human-LLM Conversations on Political Issues
Computation and Language
Makes learning from AI more effective for you.
DiscussLLM: Teaching Large Language Models When to Speak
Computation and Language
AI learns to speak only when it has something useful to say.