Plausibility as Failure: How LLMs and Humans Co-Construct Epistemic Error
By: Claudia Vale Oliveira, Nelson Zagalo, Filipe Silva, et al.
Large language models (LLMs) are increasingly used as epistemic partners in everyday reasoning, yet their errors remain predominantly analyzed through predictive metrics rather than through their interpretive effects on human judgment. This study examines how different forms of epistemic failure emerge, are masked, and are tolerated in human-AI interaction, where failure is understood as a relational breakdown shaped by model-generated plausibility and human interpretive judgment. We conducted a three-round, multi-LLM evaluation using interdisciplinary tasks and progressively differentiated assessment frameworks to observe how evaluators interpret model responses across linguistic, epistemic, and credibility dimensions. Our findings show that LLM errors shift from predictive to hermeneutic forms, in which linguistic fluency, structural coherence, and superficially plausible citations conceal deeper distortions of meaning. Evaluators frequently conflated criteria such as correctness, relevance, bias, groundedness, and consistency, indicating that human judgment collapses analytical distinctions into intuitive heuristics shaped by form and fluency. Across rounds, we observed a systematic verification burden and cognitive drift: as tasks became denser, evaluators increasingly relied on surface cues, allowing erroneous yet well-formed answers to pass as credible. These results suggest that error is not solely a property of model behavior but a co-constructed outcome of generative plausibility and human interpretive shortcuts. Understanding AI epistemic failure therefore requires reframing evaluation as a relational, interpretive process in which the boundary between system failure and human miscalibration becomes porous. The study offers implications for LLM assessment, digital literacy, and the design of trustworthy human-AI communication.