The MEVIR 2 Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
By: Daniel Schwabe
The MEVIR 2 framework advances how we understand trust decisions in a polarized information landscape. Unlike classical models that assume ideal rationality, MEVIR 2 recognizes that human trust emerges from three interacting foundations: how we process evidence procedurally, our character as epistemic agents (virtue theory), and our moral intuitions shaped by both evolutionary cooperation (the MAC model) and cultural values (Extended Moral Foundations Theory). This explains why different people find different authorities, facts, and tradeoffs compelling. MEVIR 2's key innovation introduces "Truth Tribes" (TTs): stable communities sharing aligned procedural, virtue, and moral epistemic profiles. These aren't mere ideological groups but emergent clusters with internally coherent "trust lattices" that remain mutually unintelligible across tribal boundaries. The framework incorporates distinctions between Truth Bearers and Truth Makers, showing that disagreements often stem from fundamentally different views about which aspects of reality can make propositions true. Case studies on vaccination mandates and climate policy demonstrate how different moral configurations lead people to select different authorities, evidential standards, and trust anchors, constructing separate moral-epistemic worlds. The framework reinterprets cognitive biases as failures of epistemic virtue and provides foundations for designing decision support systems that could enhance metacognition, make trust processes transparent, and foster more conscientious reasoning across divided communities. MEVIR 2 thus offers both descriptive power for understanding polarization and normative guidance for bridging epistemic divides.
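As a purely illustrative sketch, not the authors' implementation, the following Python code shows one way the structure the abstract describes could be represented: an agent's epistemic profile as weights over the three foundations (procedural, virtue, moral), with agents whose profiles closely align grouped into "Truth Tribe"-like clusters. All dimension names, the similarity measure, and the threshold are assumptions made for illustration.

```python
# Hypothetical sketch only: encodes a MEVIR 2-style epistemic profile as weights
# over three foundations and greedily clusters closely aligned profiles.
# Dimension names, data, and thresholds are invented for illustration.
from dataclasses import dataclass
import math


@dataclass
class EpistemicProfile:
    procedural: dict[str, float]  # e.g. weight given to peer review vs. personal testimony
    virtue: dict[str, float]      # e.g. intellectual humility, conscientiousness
    moral: dict[str, float]       # e.g. care, fairness, loyalty, authority, purity

    def vector(self) -> list[float]:
        # Flatten the three foundations into one ordered feature vector
        # (assumes every profile being compared uses the same dimension names).
        merged = {**self.procedural, **self.virtue, **self.moral}
        return [merged[k] for k in sorted(merged)]


def similarity(a: EpistemicProfile, b: EpistemicProfile) -> float:
    # Cosine similarity between two flattened profiles (1.0 = identical direction).
    va, vb = a.vector(), b.vector()
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0


def truth_tribes(profiles: dict[str, EpistemicProfile], threshold: float = 0.9) -> list[set[str]]:
    # Greedy clustering: an agent joins the first cluster whose representative
    # profile it resembles above the threshold, otherwise it seeds a new cluster.
    tribes: list[set[str]] = []
    for name, profile in profiles.items():
        for tribe in tribes:
            representative = profiles[next(iter(tribe))]
            if similarity(profile, representative) >= threshold:
                tribe.add(name)
                break
        else:
            tribes.append({name})
    return tribes
```

In this reading, the "mutual unintelligibility" across tribal boundaries would correspond to low inter-cluster similarity: agents in different clusters weight authorities, evidential standards, and moral considerations so differently that each other's trust lattices do not cohere from the outside.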
Similar Papers
The MEVIR Framework: A Virtue-Informed Moral-Epistemic Model of Human Trust Decisions
Computers and Society
Helps people trust true information better.
A race to belief: How Evidence Accumulation shapes trust in AI and Human informants
Human-Computer Interaction
Explains why we trust AI for facts, people for feelings.
EvalMORAAL: Interpretable Chain-of-Thought and LLM-as-Judge Evaluation for Moral Alignment in Large Language Models
Computation and Language
Checks if AI understands different cultures fairly.