Epistemological Fault Lines Between Human and Artificial Intelligence
By: Walter Quattrociocchi, Valerio Capraro, Matjaž Perc
Large language models (LLMs) are widely described as artificial intelligence, yet their epistemic profile diverges sharply from human cognition. Here we show that the apparent alignment between human and machine outputs conceals a deeper structural mismatch in how judgments are produced. Tracing the historical shift from symbolic AI and information filtering systems to large-scale generative transformers, we argue that LLMs are not epistemic agents but stochastic pattern-completion systems, formally describable as walks on high-dimensional graphs of linguistic transitions rather than as systems that form beliefs or models of the world. By systematically mapping human and artificial epistemic pipelines, we identify seven epistemic fault lines: divergences in grounding, parsing, experience, motivation, causal reasoning, metacognition, and value. We call the resulting condition Epistemia: a structural situation in which linguistic plausibility substitutes for epistemic evaluation, producing the feeling of knowing without the labor of judgment. We conclude by outlining consequences for evaluation, governance, and epistemic literacy in societies increasingly organized around generative AI.
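The abstract's formal claim, that generation reduces to a walk on a graph of linguistic transitions, can be made concrete with a toy sketch. The snippet below is an illustrative assumption, not the paper's construction: it stands in a first-order Markov chain over tokens for the high-dimensional transition graph of a real LLM, and the corpus, function names, and frequency-proportional sampling are all hypothetical.

```python
import random
from collections import defaultdict

# Toy illustration of the paper's framing: a generator as a stochastic walk
# on a graph of linguistic transitions (here, a first-order Markov chain over
# tokens). The walker completes patterns; nothing in it forms beliefs or a
# model of the world.

corpus = (
    "the model predicts the next token "
    "the model completes the pattern "
    "the walk follows plausible transitions"
).split()

# Build the transition graph: each token points to every token observed
# immediately after it, so repeated successors are sampled more often.
graph = defaultdict(list)
for current_tok, next_tok in zip(corpus, corpus[1:]):
    graph[current_tok].append(next_tok)

def walk(start, steps, seed=0):
    """Generate text by walking the transition graph: at each step, sample
    a successor in proportion to how often it followed the current token."""
    rng = random.Random(seed)
    token, output = start, [start]
    for _ in range(steps):
        successors = graph.get(token)
        if not successors:  # dead end: no observed continuation
            break
        token = rng.choice(successors)
        output.append(token)
    return " ".join(output)

print(walk("the", steps=8))
# The output is locally plausible, yet nothing in the system evaluates
# whether it is true: linguistic plausibility stands in for judgment.
```

At this toy scale the epistemic point is visible directly: the walk produces fluent continuations purely from transition statistics, which is the condition the paper names Epistemia when it operates at the scale of a full language model.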