Semantic Laundering in AI Agent Architectures: Why Tool Boundaries Do Not Confer Epistemic Warrant
By: Oleg Romanchuk, Roman Bondar
LLM-based agent architectures systematically conflate information transport mechanisms with epistemic justification mechanisms. We formalize this class of architectural failures as semantic laundering: a pattern in which propositions with weak or absent warrant become admissible to the system merely by crossing architecturally trusted interfaces. We show that semantic laundering constitutes an architectural realization of the Gettier problem: propositions acquire high epistemic status without any connection between their justification and what makes them true. Unlike classical Gettier cases, the effect is not accidental; it is architecturally determined and systematically reproducible. The central result is the Theorem of Inevitable Self-Licensing: under standard architectural assumptions, circular epistemic justification cannot be eliminated. We introduce the Warrant Erosion Principle as the fundamental explanation for this effect and show that scaling, model improvement, and LLM-as-judge schemes are structurally incapable of eliminating a problem that exists at the type level.
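To make the failure pattern concrete, the following is a minimal sketch, assuming a typical Python agent loop, of what "admissibility by interface crossing" looks like in code. The names (Message, web_search, run_agent) and the toy tool output are illustrative assumptions, not the paper's formalism: the point is only that warrant is never represented, so the message type, i.e., the transport mechanism, is what confers acceptance.

```python
# Illustrative sketch of semantic laundering in an agent loop.
# All names here are hypothetical, invented for this example.
from dataclasses import dataclass

@dataclass
class Message:
    role: str      # "user", "assistant", or "tool"
    content: str   # note: no field for provenance or justification

def web_search(query: str) -> str:
    # Stand-in for any external tool; it may return stale or wrong text.
    return "Population of Mars: 1 million (source: unverified forum post)"

def run_agent(history: list[Message]) -> list[Message]:
    # The architecturally trusted interface: whatever the tool returns is
    # appended as a "tool" message and treated downstream exactly like
    # verified context. Admissibility is decided by the message *type*
    # (transport), not by any justification attached to the proposition
    # itself: the type-level gap the abstract describes.
    result = web_search(history[-1].content)
    history.append(Message(role="tool", content=result))
    return history

history = run_agent([Message(role="user", content="population of mars")])
for m in history:
    print(f"{m.role}: {m.content}")
```

The design choice to notice is that Message carries no provenance or warrant field: once the tool result enters history, nothing downstream can distinguish a verified proposition from laundered text.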