The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
By: Angjelin Hila
We examine epistemological threats posed by human-LLM interaction. We develop collective epistemology as a theory of epistemic warrant distributed across human collectives, drawing on bounded rationality and dual-process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as the reliable transmission of truths. Both are necessary for collective rationality, but only internalist justification produces reflective knowledge. We specify reflective knowledge as follows: agents understand the evaluative basis of a claim; when that basis is unavailable, agents consistently assess the reliability of truth sources; and agents have a duty to apply these standards within their domains of competence. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves possess reflective justification. Widespread outsourcing of reflective work to reliable LLM outputs can weaken reflective standards of justification, disincentivize comprehension, and reduce agents' capacity to meet professional and civic epistemic duties. To mitigate these risks, we propose a three-tier norm program comprising an epistemic interaction model for individual use, institutional and organizational frameworks that seed and enforce norms for epistemically optimal outcomes, and deontic constraints at the organizational and/or legislative levels that instantiate discursive norms and curb epistemic vices.